Dec 04 15:35:54 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 04 15:35:54 crc restorecon[4681]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 04 15:35:54 crc restorecon[4681]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc 
restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc 
restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 
15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc 
restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc 
restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 
crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:54 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 
15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 04 15:35:55 crc restorecon[4681]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 04 15:35:55 crc restorecon[4681]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc 
restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 04 15:35:55 crc restorecon[4681]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 04 15:35:57 crc kubenswrapper[4878]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 04 15:35:57 crc kubenswrapper[4878]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 04 15:35:57 crc kubenswrapper[4878]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 04 15:35:57 crc kubenswrapper[4878]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 04 15:35:57 crc kubenswrapper[4878]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 04 15:35:57 crc kubenswrapper[4878]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.016683 4878 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019675 4878 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019696 4878 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019701 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019705 4878 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019710 4878 feature_gate.go:330] unrecognized feature gate: Example Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019715 4878 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019719 4878 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019729 4878 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019733 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019737 4878 feature_gate.go:330] 
unrecognized feature gate: GCPClusterHostedDNS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019740 4878 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019744 4878 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019748 4878 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019752 4878 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019755 4878 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019759 4878 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019762 4878 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019767 4878 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019772 4878 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019776 4878 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019781 4878 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019785 4878 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019789 4878 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019793 4878 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019797 4878 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019802 4878 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019806 4878 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019810 4878 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019814 4878 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019817 4878 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019821 4878 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019825 4878 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019828 4878 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019831 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019835 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019839 4878 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019844 4878 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019848 4878 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019851 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019855 4878 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019859 4878 
feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019862 4878 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019866 4878 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019883 4878 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019887 4878 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019892 4878 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019896 4878 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019900 4878 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019904 4878 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019907 4878 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019911 4878 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019914 4878 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019917 4878 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019922 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019927 4878 
feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019930 4878 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019934 4878 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019939 4878 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019943 4878 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019949 4878 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019955 4878 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019972 4878 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019977 4878 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019980 4878 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019984 4878 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019988 4878 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019992 4878 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.019996 4878 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.020001 4878 feature_gate.go:330] unrecognized feature gate: 
DNSNameResolver Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.020004 4878 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.020008 4878 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020100 4878 flags.go:64] FLAG: --address="0.0.0.0" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020110 4878 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020126 4878 flags.go:64] FLAG: --anonymous-auth="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020133 4878 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020140 4878 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020145 4878 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020152 4878 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020160 4878 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020165 4878 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020169 4878 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020174 4878 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020178 4878 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020183 4878 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020187 4878 flags.go:64] 
FLAG: --cgroup-root="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020192 4878 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020196 4878 flags.go:64] FLAG: --client-ca-file="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020201 4878 flags.go:64] FLAG: --cloud-config="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020207 4878 flags.go:64] FLAG: --cloud-provider="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020212 4878 flags.go:64] FLAG: --cluster-dns="[]" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020223 4878 flags.go:64] FLAG: --cluster-domain="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020228 4878 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020234 4878 flags.go:64] FLAG: --config-dir="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020239 4878 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020245 4878 flags.go:64] FLAG: --container-log-max-files="5" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020254 4878 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020259 4878 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020264 4878 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020269 4878 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020275 4878 flags.go:64] FLAG: --contention-profiling="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020280 4878 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020285 4878 flags.go:64] FLAG: 
--cpu-cfs-quota-period="100ms" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020298 4878 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020303 4878 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020310 4878 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020315 4878 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020320 4878 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020325 4878 flags.go:64] FLAG: --enable-load-reader="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020330 4878 flags.go:64] FLAG: --enable-server="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020336 4878 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020347 4878 flags.go:64] FLAG: --event-burst="100" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020352 4878 flags.go:64] FLAG: --event-qps="50" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020357 4878 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020363 4878 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020369 4878 flags.go:64] FLAG: --eviction-hard="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020376 4878 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020381 4878 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020386 4878 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020391 4878 
flags.go:64] FLAG: --eviction-soft="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020396 4878 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020401 4878 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020406 4878 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020410 4878 flags.go:64] FLAG: --experimental-mounter-path="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020415 4878 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020420 4878 flags.go:64] FLAG: --fail-swap-on="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020425 4878 flags.go:64] FLAG: --feature-gates="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020434 4878 flags.go:64] FLAG: --file-check-frequency="20s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020439 4878 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020445 4878 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020450 4878 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020455 4878 flags.go:64] FLAG: --healthz-port="10248" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020461 4878 flags.go:64] FLAG: --help="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020467 4878 flags.go:64] FLAG: --hostname-override="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020473 4878 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020479 4878 flags.go:64] FLAG: --http-check-frequency="20s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020485 4878 flags.go:64] FLAG: 
--image-credential-provider-bin-dir="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020490 4878 flags.go:64] FLAG: --image-credential-provider-config="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020495 4878 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020508 4878 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020513 4878 flags.go:64] FLAG: --image-service-endpoint="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020518 4878 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020524 4878 flags.go:64] FLAG: --kube-api-burst="100" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020529 4878 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020534 4878 flags.go:64] FLAG: --kube-api-qps="50" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020539 4878 flags.go:64] FLAG: --kube-reserved="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020545 4878 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020550 4878 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020555 4878 flags.go:64] FLAG: --kubelet-cgroups="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020560 4878 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020565 4878 flags.go:64] FLAG: --lock-file="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020570 4878 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020575 4878 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020580 4878 
flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020588 4878 flags.go:64] FLAG: --log-json-split-stream="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020593 4878 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020598 4878 flags.go:64] FLAG: --log-text-split-stream="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020605 4878 flags.go:64] FLAG: --logging-format="text" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020610 4878 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020616 4878 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020621 4878 flags.go:64] FLAG: --manifest-url="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020626 4878 flags.go:64] FLAG: --manifest-url-header="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020633 4878 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020638 4878 flags.go:64] FLAG: --max-open-files="1000000" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020645 4878 flags.go:64] FLAG: --max-pods="110" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020650 4878 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020655 4878 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020667 4878 flags.go:64] FLAG: --memory-manager-policy="None" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020673 4878 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020678 4878 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 04 15:35:57 crc 
kubenswrapper[4878]: I1204 15:35:57.020683 4878 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020688 4878 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020701 4878 flags.go:64] FLAG: --node-status-max-images="50" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020707 4878 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020712 4878 flags.go:64] FLAG: --oom-score-adj="-999" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020725 4878 flags.go:64] FLAG: --pod-cidr="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020730 4878 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020738 4878 flags.go:64] FLAG: --pod-manifest-path="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020743 4878 flags.go:64] FLAG: --pod-max-pids="-1" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020754 4878 flags.go:64] FLAG: --pods-per-core="0" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020759 4878 flags.go:64] FLAG: --port="10250" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020764 4878 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020769 4878 flags.go:64] FLAG: --provider-id="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020774 4878 flags.go:64] FLAG: --qos-reserved="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020779 4878 flags.go:64] FLAG: --read-only-port="10255" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020784 4878 flags.go:64] FLAG: --register-node="true" Dec 04 15:35:57 crc 
kubenswrapper[4878]: I1204 15:35:57.020789 4878 flags.go:64] FLAG: --register-schedulable="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020794 4878 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020805 4878 flags.go:64] FLAG: --registry-burst="10" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020810 4878 flags.go:64] FLAG: --registry-qps="5" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020815 4878 flags.go:64] FLAG: --reserved-cpus="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020820 4878 flags.go:64] FLAG: --reserved-memory="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020826 4878 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020832 4878 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020837 4878 flags.go:64] FLAG: --rotate-certificates="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020841 4878 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020846 4878 flags.go:64] FLAG: --runonce="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020851 4878 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020856 4878 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020861 4878 flags.go:64] FLAG: --seccomp-default="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020865 4878 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020886 4878 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020890 4878 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 
04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020894 4878 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020899 4878 flags.go:64] FLAG: --storage-driver-password="root" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020903 4878 flags.go:64] FLAG: --storage-driver-secure="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020907 4878 flags.go:64] FLAG: --storage-driver-table="stats" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020910 4878 flags.go:64] FLAG: --storage-driver-user="root" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020914 4878 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020918 4878 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020923 4878 flags.go:64] FLAG: --system-cgroups="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020941 4878 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020947 4878 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020951 4878 flags.go:64] FLAG: --tls-cert-file="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020955 4878 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020963 4878 flags.go:64] FLAG: --tls-min-version="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020967 4878 flags.go:64] FLAG: --tls-private-key-file="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020972 4878 flags.go:64] FLAG: --topology-manager-policy="none" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020976 4878 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020981 4878 flags.go:64] FLAG: 
--topology-manager-scope="container" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020986 4878 flags.go:64] FLAG: --v="2" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020993 4878 flags.go:64] FLAG: --version="false" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.020999 4878 flags.go:64] FLAG: --vmodule="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.021004 4878 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.021009 4878 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021147 4878 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021156 4878 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021161 4878 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021165 4878 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021170 4878 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021176 4878 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021181 4878 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021185 4878 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021190 4878 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021194 4878 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021199 4878 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021204 4878 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021209 4878 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021213 4878 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021217 4878 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021222 4878 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021226 4878 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021230 4878 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021236 4878 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021239 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021243 4878 
feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021247 4878 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021257 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021261 4878 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021265 4878 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021269 4878 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021274 4878 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021278 4878 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021281 4878 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021285 4878 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021289 4878 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021295 4878 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021300 4878 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021309 4878 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021316 4878 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021320 4878 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021325 4878 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021329 4878 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021332 4878 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021335 4878 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021339 4878 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021342 4878 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021346 4878 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021349 4878 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021353 4878 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021356 4878 feature_gate.go:330] 
unrecognized feature gate: MultiArchInstallGCP Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021360 4878 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021364 4878 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021367 4878 feature_gate.go:330] unrecognized feature gate: Example Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021371 4878 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021377 4878 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021380 4878 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021384 4878 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021387 4878 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021390 4878 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021394 4878 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021397 4878 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021401 4878 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021406 4878 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021410 4878 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 15:35:57 crc kubenswrapper[4878]: 
W1204 15:35:57.021414 4878 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021418 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021423 4878 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021428 4878 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021432 4878 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021437 4878 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021441 4878 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021444 4878 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021448 4878 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021452 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.021456 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.021471 4878 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false 
UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.028517 4878 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.028592 4878 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028685 4878 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028696 4878 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028704 4878 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028717 4878 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028724 4878 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028729 4878 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028736 4878 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028742 4878 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028748 4878 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028754 4878 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028759 4878 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028765 4878 feature_gate.go:330] unrecognized feature gate: Example Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028771 4878 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028778 4878 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028784 4878 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028790 4878 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028797 4878 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028802 4878 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028807 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028812 4878 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028817 4878 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028821 4878 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028826 4878 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028830 4878 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028835 4878 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028847 4878 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028853 4878 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028857 4878 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028862 4878 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028882 4878 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028888 4878 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028893 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028898 4878 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028904 4878 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028909 4878 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028914 4878 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028919 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028924 4878 feature_gate.go:330] unrecognized feature gate: HardwareSpeed 
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028930 4878 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028935 4878 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028939 4878 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028944 4878 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028949 4878 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028954 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028959 4878 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028963 4878 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028967 4878 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028972 4878 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028979 4878 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028989 4878 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.028995 4878 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029000 4878 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029005 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029010 4878 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029015 4878 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029019 4878 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029024 4878 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029028 4878 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029034 4878 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029038 4878 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029044 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029049 4878 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029054 4878 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 
15:35:57.029059 4878 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029064 4878 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029070 4878 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029075 4878 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029080 4878 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029085 4878 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029090 4878 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029095 4878 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.029106 4878 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029305 4878 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029316 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029322 4878 feature_gate.go:330] unrecognized feature gate: 
Example Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029328 4878 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029333 4878 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029338 4878 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029342 4878 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029347 4878 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029352 4878 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029357 4878 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029363 4878 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029367 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029371 4878 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029375 4878 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029380 4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029384 4878 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029389 4878 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029395 
4878 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029399 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029405 4878 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029409 4878 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029413 4878 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029418 4878 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029422 4878 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029429 4878 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029436 4878 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029441 4878 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029446 4878 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029451 4878 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029456 4878 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029460 4878 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029465 4878 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029469 4878 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029474 4878 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029478 4878 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029483 4878 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029487 4878 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029492 4878 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029496 4878 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029501 4878 
feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029505 4878 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029509 4878 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029517 4878 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029522 4878 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029527 4878 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029532 4878 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029537 4878 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029542 4878 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029547 4878 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029551 4878 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029555 4878 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029560 4878 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029565 4878 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029569 4878 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 04 15:35:57 crc 
kubenswrapper[4878]: W1204 15:35:57.029574 4878 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029578 4878 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029583 4878 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029589 4878 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029595 4878 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029601 4878 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029605 4878 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029612 4878 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029618 4878 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029624 4878 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029631 4878 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029635 4878 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029640 4878 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029644 4878 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029648 4878 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029651 4878 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.029655 4878 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.029660 4878 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.030164 4878 server.go:940] "Client rotation is on, will bootstrap in background" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.036266 4878 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.036409 4878 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.037047 4878 server.go:997] "Starting client certificate rotation" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.037077 4878 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.037256 4878 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-03 01:11:45.631316049 +0000 UTC Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.037368 4878 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.048438 4878 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.050178 4878 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.051550 4878 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.060476 4878 log.go:25] "Validated CRI v1 runtime API" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.080473 4878 log.go:25] "Validated CRI v1 image API" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.082251 4878 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.086088 4878 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-04-15-31-50-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.086129 4878 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.106537 4878 manager.go:217] Machine: {Timestamp:2025-12-04 15:35:57.104771416 +0000 UTC m=+1.067308392 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1031ff9d-cccb-4da2-a988-194843f64ced BootID:96c4f62a-170b-46e9-91e9-d7457aac55d0 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 
Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:63:8d:77 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:63:8d:77 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ab:ee:6f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:85:33:e4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:24:e4:d9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:f4:99:01 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:b2:e0:c6:81:67:23 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1a:4b:83:f7:30:62 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.106809 4878 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.106979 4878 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.107486 4878 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.107653 4878 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.107696 4878 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":nul
l,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.107924 4878 topology_manager.go:138] "Creating topology manager with none policy" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.107938 4878 container_manager_linux.go:303] "Creating device plugin manager" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.108029 4878 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.108651 4878 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.108816 4878 state_mem.go:36] "Initialized new in-memory state store" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.108926 4878 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.110616 4878 kubelet.go:418] "Attempting to sync node with API server" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.110644 4878 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.110667 4878 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.110685 4878 kubelet.go:324] "Adding apiserver pod source" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.110698 4878 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.112570 4878 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.113151 4878 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.114798 4878 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.115122 4878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.115256 4878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.115225 4878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.115351 4878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection 
refused" logger="UnhandledError" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115676 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115712 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115726 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115740 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115763 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115782 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115796 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115844 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115860 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115898 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115959 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.115975 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.116522 4878 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.117480 4878 server.go:1280] "Started kubelet" Dec 04 
15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.118001 4878 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.117993 4878 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.118534 4878 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.119116 4878 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.119481 4878 server.go:460] "Adding debug handlers to kubelet server" Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.119614 4878 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e0d25225f3445 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 15:35:57.117428805 +0000 UTC m=+1.079965771,LastTimestamp:2025-12-04 15:35:57.117428805 +0000 UTC m=+1.079965771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.120011 4878 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.120046 4878 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 04 15:35:57 crc systemd[1]: Started Kubernetes Kubelet. Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.120370 4878 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.120392 4878 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.120369 4878 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.120567 4878 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.120984 4878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.121075 4878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.120210 4878 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 01:46:58.167796415 +0000 UTC Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.121800 4878 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 130h11m1.046007466s for next certificate rotation Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.122328 4878 factory.go:219] Registration of the containerd container factory failed: unable to 
create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.122370 4878 factory.go:55] Registering systemd factory Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.122384 4878 factory.go:221] Registration of the systemd container factory successfully Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.122259 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.123958 4878 factory.go:153] Registering CRI-O factory Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.124171 4878 factory.go:221] Registration of the crio container factory successfully Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.124334 4878 factory.go:103] Registering Raw factory Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.124548 4878 manager.go:1196] Started watching for new ooms in manager Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.126753 4878 manager.go:319] Starting recovery of all containers Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135442 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135548 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 04 15:35:57 crc 
kubenswrapper[4878]: I1204 15:35:57.135565 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135583 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135596 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135607 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135620 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135633 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135652 4878 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135667 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135688 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135699 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135711 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135729 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135744 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135759 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135774 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135791 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135805 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135821 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135841 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135859 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135896 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.135911 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136243 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136267 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136288 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" 
seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136307 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136342 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136356 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136370 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136384 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136397 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 
15:35:57.136411 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136424 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136440 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136451 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136463 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136475 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136489 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136502 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136567 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136582 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136594 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136610 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136626 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136641 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136662 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.136674 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137516 4878 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137553 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137573 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137593 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137627 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137650 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137671 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137690 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137705 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137721 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137736 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137751 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137764 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137777 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137790 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137808 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137822 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137840 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137885 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137905 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137920 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137937 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137954 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137968 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.137985 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138003 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138017 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138033 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138049 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138064 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138081 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138097 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138115 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138134 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138150 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138166 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138181 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138200 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138216 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" 
seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138229 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138242 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138258 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138274 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138288 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138301 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 04 15:35:57 crc 
kubenswrapper[4878]: I1204 15:35:57.138315 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138329 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138339 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138349 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138363 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138374 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138387 4878 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138398 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138415 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138429 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138441 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138460 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138473 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138487 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138500 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138516 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138530 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138547 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138561 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138577 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138592 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138605 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138618 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138632 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138648 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138661 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138674 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138686 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138700 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138712 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138722 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" 
seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138734 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138745 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138757 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138769 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138783 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138797 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: 
I1204 15:35:57.138810 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138823 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.138839 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139462 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139490 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139514 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139537 4878 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139556 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139577 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139597 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139616 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139635 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139653 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139677 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139701 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139726 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139749 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139771 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139796 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139819 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139838 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139859 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139904 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139925 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139945 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 04 
15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139966 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.139987 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140009 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140033 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140052 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140071 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140104 4878 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140124 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140148 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140172 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140192 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140212 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140234 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140266 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140288 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140308 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140330 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140353 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140376 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140396 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140419 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140441 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140464 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140485 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140508 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140529 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140547 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140568 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140588 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140609 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140630 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140648 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140669 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140691 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140710 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140731 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140750 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140769 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140792 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140813 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140832 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140853 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140897 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140918 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140940 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140963 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.140983 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.141006 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.141027 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.141048 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.141070 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.141090 4878 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.141111 4878 reconstruct.go:97] "Volume reconstruction finished"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.141126 4878 reconciler.go:26] "Reconciler: start to sync state"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.150350 4878 manager.go:324] Recovery completed
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.163421 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.165898 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.165953 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.165969 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.166623 4878 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.166673 4878 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.166697 4878 state_mem.go:36] "Initialized new in-memory state store"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.176067 4878 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.178206 4878 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.178309 4878 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.178357 4878 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.178443 4878 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.181463 4878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused
Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.181544 4878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.181831 4878 policy_none.go:49] "None policy: Start"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.185228 4878 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.185269 4878 state_mem.go:35] "Initializing new in-memory state store"
Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.221341 4878 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.242308 4878 manager.go:334] "Starting Device Plugin manager"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.242383 4878 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.242400 4878 server.go:79] "Starting device plugin registration server"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.243023 4878 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.243038 4878 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.243261 4878 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.243387 4878 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.243396 4878 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.251355 4878 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.278567 4878 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.278719 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.280474 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.280528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.280548 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.280730 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.281123 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.281211 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.281918 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.281971 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.281986 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.282176 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.282279 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.282295 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.282319 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.282329 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.282332 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.283345 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.283370 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.283388 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.283387 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.283398 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.283408 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.283549 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.283762 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.283832 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.284334 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.284363 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.284377 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.284596 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.284762 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.284807 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285085 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285122 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285132 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285297 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285332 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285345 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285458 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285479 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285490 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285505 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.285529 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.286132 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.286152 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.286164 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.325113 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.342847 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.342923 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.342949 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.342971 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.342989 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343043 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343061 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343096 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343168 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343196 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343240 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343272 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343321 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343353 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343384 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.343422 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.344713 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.344760 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.344775 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.344834 4878 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.345237 4878 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445172 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445281 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445360 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445382 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445399 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445543 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445572 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445596 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445596 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445613 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445632 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445693 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445715 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445733 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445752 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445767 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445814 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445847 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445853 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445894 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445920 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445945 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 15:35:57 crc 
kubenswrapper[4878]: I1204 15:35:57.445972 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.445778 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.446001 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.446016 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.446053 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.446095 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.446133 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.446157 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.546167 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.547452 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.547495 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.547508 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.547535 4878 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.548272 4878 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Dec 
04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.611691 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.635043 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.650411 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.650631 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ab933e85d7fb351a819a16319230d887a18af8c7e2e26092c172ea5b87a68bda WatchSource:0}: Error finding container ab933e85d7fb351a819a16319230d887a18af8c7e2e26092c172ea5b87a68bda: Status 404 returned error can't find the container with id ab933e85d7fb351a819a16319230d887a18af8c7e2e26092c172ea5b87a68bda Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.664065 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e2e12ea4d84d59ae6e8f677052e6a834f6e19febc7ecd5c4533738aedc020f7b WatchSource:0}: Error finding container e2e12ea4d84d59ae6e8f677052e6a834f6e19febc7ecd5c4533738aedc020f7b: Status 404 returned error can't find the container with id e2e12ea4d84d59ae6e8f677052e6a834f6e19febc7ecd5c4533738aedc020f7b Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.664648 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-dc3ed72d0495fd4af4d6a7d65cf04dec10acc015d80503b297a9a40de58e9023 WatchSource:0}: 
Error finding container dc3ed72d0495fd4af4d6a7d65cf04dec10acc015d80503b297a9a40de58e9023: Status 404 returned error can't find the container with id dc3ed72d0495fd4af4d6a7d65cf04dec10acc015d80503b297a9a40de58e9023 Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.666550 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.670964 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.680330 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6466194c9a7927b5b780c023a3e333bca131e1f0c47603bff3b5b17f60143252 WatchSource:0}: Error finding container 6466194c9a7927b5b780c023a3e333bca131e1f0c47603bff3b5b17f60143252: Status 404 returned error can't find the container with id 6466194c9a7927b5b780c023a3e333bca131e1f0c47603bff3b5b17f60143252 Dec 04 15:35:57 crc kubenswrapper[4878]: W1204 15:35:57.694205 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-1bc92ec32a0841edb7b858f7bb801d0cd738123bd128ac77d8f02fce24dbc50e WatchSource:0}: Error finding container 1bc92ec32a0841edb7b858f7bb801d0cd738123bd128ac77d8f02fce24dbc50e: Status 404 returned error can't find the container with id 1bc92ec32a0841edb7b858f7bb801d0cd738123bd128ac77d8f02fce24dbc50e Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.726630 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Dec 04 15:35:57 crc 
kubenswrapper[4878]: I1204 15:35:57.949183 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.950803 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.950855 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.950884 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:57 crc kubenswrapper[4878]: I1204 15:35:57.950922 4878 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 15:35:57 crc kubenswrapper[4878]: E1204 15:35:57.951576 4878 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.119711 4878 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.185480 4878 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e" exitCode=0 Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.185585 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e"} Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.185698 4878 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1bc92ec32a0841edb7b858f7bb801d0cd738123bd128ac77d8f02fce24dbc50e"} Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.185853 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.187397 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.187433 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.187445 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.188720 4878 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6" exitCode=0 Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.188782 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6"} Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.188916 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6466194c9a7927b5b780c023a3e333bca131e1f0c47603bff3b5b17f60143252"} Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.189040 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.189930 
4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.190587 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.190613 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.190623 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.191121 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.191145 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.191152 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.192049 4878 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ff40aaba3c837beb7616b04aa66175ec42dce22b95deab03efddebffefaa0746" exitCode=0 Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.192096 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ff40aaba3c837beb7616b04aa66175ec42dce22b95deab03efddebffefaa0746"} Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.192117 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dc3ed72d0495fd4af4d6a7d65cf04dec10acc015d80503b297a9a40de58e9023"} Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.192166 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.192763 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.192786 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.192795 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.194953 4878 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639" exitCode=0 Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.195068 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639"} Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.195163 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e2e12ea4d84d59ae6e8f677052e6a834f6e19febc7ecd5c4533738aedc020f7b"} Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.195304 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.196569 4878 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.196601 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.196619 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.196987 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376"} Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.197024 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ab933e85d7fb351a819a16319230d887a18af8c7e2e26092c172ea5b87a68bda"} Dec 04 15:35:58 crc kubenswrapper[4878]: W1204 15:35:58.214218 4878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:58 crc kubenswrapper[4878]: E1204 15:35:58.214322 4878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:35:58 crc kubenswrapper[4878]: W1204 15:35:58.291619 4878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:58 crc kubenswrapper[4878]: E1204 15:35:58.291707 4878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:35:58 crc kubenswrapper[4878]: W1204 15:35:58.424519 4878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:58 crc kubenswrapper[4878]: E1204 15:35:58.424665 4878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.785526 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:58 crc kubenswrapper[4878]: W1204 15:35:58.786154 4878 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:58 crc kubenswrapper[4878]: E1204 15:35:58.786276 4878 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:35:58 crc kubenswrapper[4878]: E1204 15:35:58.786208 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.787059 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.787118 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.787132 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:58 crc kubenswrapper[4878]: I1204 15:35:58.787167 4878 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 15:35:58 crc kubenswrapper[4878]: E1204 15:35:58.787922 4878 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.094342 4878 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 04 15:35:59 crc kubenswrapper[4878]: E1204 15:35:59.096888 4878 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.119916 4878 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.204465 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7"} Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.204528 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334"} Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.207386 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689"} Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.207443 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0"} Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.209787 4878 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03" exitCode=0 Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.209894 4878 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03"} Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.209932 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.211165 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.211200 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.211212 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.212679 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"963a0243e0fc1bae361a187173501210b2d41a84bb276cfcec39a4f69935422c"} Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.212792 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.214136 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.214180 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.214194 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.216783 4878 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461"} Dec 04 15:35:59 crc kubenswrapper[4878]: I1204 15:35:59.216842 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8"} Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.224673 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515"} Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.224764 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b"} Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.224787 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd"} Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.224822 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.226076 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.226117 4878 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.226139 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.226902 4878 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c1ee99812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7" exitCode=0 Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.227001 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c1ee99812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7"} Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.227110 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.228493 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.228538 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.228550 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.229650 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845"} Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.229684 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.230505 
4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.230557 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.230582 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.233061 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95"} Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.233183 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.234517 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.234582 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.234628 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.388123 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.389482 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.389520 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:00 crc 
kubenswrapper[4878]: I1204 15:36:00.389530 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:00 crc kubenswrapper[4878]: I1204 15:36:00.389553 4878 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.240645 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5418b30f6c66f72a1d99bc42e3e44d2c5eae369a8e24edb1dbab42d10f7dad5c"} Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.240700 4878 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.240712 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"8e4f9215e74438731e57fa6f60900340bc1ea89257cc0fdf3b8480c8858fc4e1"} Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.240731 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3370c6be8d898bbe818ee571c5c413010c6934e2c04d2d1701fc8067cfd4b25f"} Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.240744 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"88d1cf227c41856e35ed5433f312c767cf4257aca2189bf3a2a00300b795ea3a"} Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.240752 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.240772 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 
15:36:01.240677 4878 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.241954 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.243771 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.243829 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.243853 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.243894 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.243931 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.243953 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.246740 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.246787 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.246802 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.595520 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.603767 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:36:01 crc kubenswrapper[4878]: I1204 15:36:01.724487 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:36:02 crc kubenswrapper[4878]: I1204 15:36:02.253223 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:02 crc kubenswrapper[4878]: I1204 15:36:02.253304 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:02 crc kubenswrapper[4878]: I1204 15:36:02.253285 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bdd7e072b3a4e2b73e6d25c66598ba414fb27c262e40e4af238fc79d9cac3999"} Dec 04 15:36:02 crc kubenswrapper[4878]: I1204 15:36:02.254410 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:02 crc kubenswrapper[4878]: I1204 15:36:02.254437 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:02 crc kubenswrapper[4878]: I1204 15:36:02.254448 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:02 crc kubenswrapper[4878]: I1204 15:36:02.254543 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:02 crc kubenswrapper[4878]: I1204 15:36:02.254590 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:02 crc 
kubenswrapper[4878]: I1204 15:36:02.254605 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.233015 4878 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.256549 4878 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.256654 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.257413 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.258345 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.259086 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.259109 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.259005 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.259247 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.259282 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:03 crc kubenswrapper[4878]: I1204 15:36:03.393185 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 04 
15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.260116 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.261419 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.261472 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.261489 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.406711 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.407057 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.409098 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.409159 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.409170 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.531654 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.531994 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.533824 4878 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.533924 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.533956 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.639802 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.843584 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:36:04 crc kubenswrapper[4878]: I1204 15:36:04.889587 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:36:05 crc kubenswrapper[4878]: I1204 15:36:05.261894 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:05 crc kubenswrapper[4878]: I1204 15:36:05.261997 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:05 crc kubenswrapper[4878]: I1204 15:36:05.262851 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:05 crc kubenswrapper[4878]: I1204 15:36:05.262910 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:05 crc kubenswrapper[4878]: I1204 15:36:05.262920 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:05 crc kubenswrapper[4878]: I1204 15:36:05.263028 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 15:36:05 crc kubenswrapper[4878]: I1204 15:36:05.263060 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:05 crc kubenswrapper[4878]: I1204 15:36:05.263073 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.264773 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.265923 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.265955 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.265963 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.917232 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.917478 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.918912 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.918954 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.918967 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.943599 4878 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.943815 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.944963 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.945002 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:06 crc kubenswrapper[4878]: I1204 15:36:06.945013 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:07 crc kubenswrapper[4878]: E1204 15:36:07.251501 4878 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 04 15:36:07 crc kubenswrapper[4878]: I1204 15:36:07.844136 4878 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 04 15:36:07 crc kubenswrapper[4878]: I1204 15:36:07.844267 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 15:36:09 crc kubenswrapper[4878]: I1204 15:36:09.887068 4878 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure 
output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 04 15:36:09 crc kubenswrapper[4878]: I1204 15:36:09.887144 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 04 15:36:09 crc kubenswrapper[4878]: I1204 15:36:09.896360 4878 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 04 15:36:09 crc kubenswrapper[4878]: I1204 15:36:09.896459 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 04 15:36:13 crc kubenswrapper[4878]: I1204 15:36:13.425437 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 04 15:36:13 crc kubenswrapper[4878]: I1204 15:36:13.425667 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:13 crc kubenswrapper[4878]: I1204 15:36:13.426944 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:13 crc kubenswrapper[4878]: I1204 15:36:13.426980 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 04 15:36:13 crc kubenswrapper[4878]: I1204 15:36:13.426991 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:13 crc kubenswrapper[4878]: I1204 15:36:13.437704 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.285682 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.286694 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.286743 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.286759 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.412013 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.412253 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.413944 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.414042 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.414070 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.532555 
4878 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.532635 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.649825 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.650185 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.650985 4878 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.651105 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.652195 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:14 crc 
kubenswrapper[4878]: I1204 15:36:14.652282 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.652297 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.658367 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:36:14 crc kubenswrapper[4878]: E1204 15:36:14.887527 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.895157 4878 trace.go:236] Trace[654699966]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 15:36:00.517) (total time: 14377ms): Dec 04 15:36:14 crc kubenswrapper[4878]: Trace[654699966]: ---"Objects listed" error: 14377ms (15:36:14.895) Dec 04 15:36:14 crc kubenswrapper[4878]: Trace[654699966]: [14.377846458s] [14.377846458s] END Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.895486 4878 trace.go:236] Trace[12116806]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 15:36:00.818) (total time: 14076ms): Dec 04 15:36:14 crc kubenswrapper[4878]: Trace[12116806]: ---"Objects listed" error: 14076ms (15:36:14.895) Dec 04 15:36:14 crc kubenswrapper[4878]: Trace[12116806]: [14.076536202s] [14.076536202s] END Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.895495 4878 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.895506 4878 trace.go:236] Trace[1078278456]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
(04-Dec-2025 15:36:01.306) (total time: 13588ms): Dec 04 15:36:14 crc kubenswrapper[4878]: Trace[1078278456]: ---"Objects listed" error: 13587ms (15:36:14.894) Dec 04 15:36:14 crc kubenswrapper[4878]: Trace[1078278456]: [13.58852366s] [13.58852366s] END Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.895859 4878 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.895579 4878 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.897633 4878 trace.go:236] Trace[1831813434]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Dec-2025 15:36:01.064) (total time: 13830ms): Dec 04 15:36:14 crc kubenswrapper[4878]: Trace[1831813434]: ---"Objects listed" error: 13829ms (15:36:14.894) Dec 04 15:36:14 crc kubenswrapper[4878]: Trace[1831813434]: [13.830043313s] [13.830043313s] END Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.897664 4878 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 04 15:36:14 crc kubenswrapper[4878]: E1204 15:36:14.897996 4878 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.899676 4878 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.903287 4878 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 04 15:36:14 crc kubenswrapper[4878]: I1204 15:36:14.936710 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:36:15 crc 
kubenswrapper[4878]: I1204 15:36:15.121904 4878 apiserver.go:52] "Watching apiserver" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.125279 4878 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.125644 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.126115 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.126133 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.126491 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.126256 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.126566 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.126802 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.126930 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.126136 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.127042 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.129446 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.130812 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.131207 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.131454 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.131663 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.132177 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.135328 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.135629 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.136594 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.172396 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.193769 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.215460 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.221237 4878 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.233300 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.247239 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.259225 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.275654 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.290745 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.294258 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.301068 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.301142 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302092 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302135 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302173 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302201 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302227 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302252 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302286 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 04 15:36:15 
crc kubenswrapper[4878]: I1204 15:36:15.302308 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302330 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302356 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302380 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302409 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302433 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302460 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302485 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302484 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302508 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302586 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302616 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302641 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302661 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302722 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303158 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303211 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303240 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303268 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303316 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303344 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303384 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303408 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303427 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303498 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303535 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303554 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303580 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303618 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303643 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303665 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303699 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303720 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303759 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303778 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303798 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303819 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303856 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302572 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303898 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302842 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.302853 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303929 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303910 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303956 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303074 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304001 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303131 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304024 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304046 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304083 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304100 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304123 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304166 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304199 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304276 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304303 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304354 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304374 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304394 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304427 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304488 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304508 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304523 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304547 4878 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304582 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304600 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304618 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304649 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304668 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304691 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304725 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304745 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304764 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304783 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304815 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304834 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304860 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304907 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304927 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304948 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 04 
15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304985 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305001 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305019 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305145 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305220 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305240 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305256 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305293 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305314 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305332 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305349 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 
15:36:15.305382 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305401 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305419 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305457 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305473 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305493 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305527 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305546 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305563 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305583 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305617 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305635 4878 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305658 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305695 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305716 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305733 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305776 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305798 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308255 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308286 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308312 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308336 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308357 4878 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308376 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308395 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308802 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308897 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308921 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308939 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308960 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308978 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308997 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309019 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309052 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309082 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309106 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309130 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309153 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309175 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") 
" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309200 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309222 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309245 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309267 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309288 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309397 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309423 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309477 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309504 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309530 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309658 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309685 
4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309727 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309750 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.310436 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303219 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303357 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303584 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303639 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.310479 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311075 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303795 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.303888 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304045 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304163 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304253 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304284 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304313 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304333 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304550 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304547 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304751 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304815 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304861 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.304962 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305042 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305076 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305095 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305249 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305265 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305279 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305288 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305472 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305508 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305618 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.305731 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.306004 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.306616 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.306785 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.307105 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.307143 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.307937 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.307967 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308375 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308571 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308852 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.308933 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309131 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309199 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309394 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309433 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.309644 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.310041 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.310066 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.310217 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.310241 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.310262 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.310285 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.310319 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311105 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311607 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311635 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311657 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311677 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311694 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311723 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311748 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311749 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311769 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.311903 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312131 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312173 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312199 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312367 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312390 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312399 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312481 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312541 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312561 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312573 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312738 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312751 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312775 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312922 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312948 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.312968 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313002 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313111 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313219 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313416 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313432 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313458 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313497 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313584 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313596 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313603 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313612 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313645 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.313712 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314163 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314169 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314221 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314249 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314270 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314968 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315024 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315051 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315152 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315905 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315937 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315963 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315985 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316003 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316023 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316042 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316064 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316087 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316107 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316125 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316143 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316161 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316182 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316202 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316220 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316244 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316295 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316320 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316342 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316364 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316389 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316441 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316469 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316497 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316521 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316567 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316945 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.317805 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318189 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318223 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318252 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318347 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318364 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318378 4878 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318390 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318406 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318419 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318433 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318445 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318457 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318468 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318480 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318492 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on 
node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318502 4878 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318512 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318523 4878 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318533 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318543 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318553 4878 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318565 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318576 4878 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318588 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318598 4878 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318609 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318620 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318732 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318751 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318767 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318782 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318796 4878 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318809 4878 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318821 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318837 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318851 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320142 4878 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320154 4878 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320165 4878 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320176 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320188 4878 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320200 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320210 4878 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320222 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on 
node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320232 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320242 4878 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320253 4878 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320262 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320272 4878 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320285 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320295 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath 
\"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320306 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320315 4878 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320325 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320334 4878 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320344 4878 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320354 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320366 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320377 4878 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320387 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320399 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320415 4878 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320426 4878 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320436 4878 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320446 4878 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320456 4878 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320466 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320477 4878 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320488 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320497 4878 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320507 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320517 4878 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320526 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320536 4878 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320545 4878 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320555 4878 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320565 4878 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320575 4878 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320586 4878 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320596 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc 
kubenswrapper[4878]: I1204 15:36:15.320606 4878 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314544 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314653 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314680 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.320843 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.321355 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.321536 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.321571 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.321575 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315209 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315258 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315541 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315566 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315773 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315829 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315859 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315908 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.321765 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.315929 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316155 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316249 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316599 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316627 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316660 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316679 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316703 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316805 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.316909 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.317152 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.317716 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318131 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318183 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318262 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318301 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318328 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.318344 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.319174 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.319229 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.319263 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.319527 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.319540 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.322685 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.314860 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.323203 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.323215 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:15.823159161 +0000 UTC m=+19.785696117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.323184 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.323440 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.323475 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.323829 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.323908 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.324011 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.324272 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.324416 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.324520 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.324843 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.325394 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.326134 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.325491 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.325631 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.326014 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.326279 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.326424 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.326573 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.326622 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.326775 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.326938 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.327234 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.327263 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.327476 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.327793 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.328104 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.328243 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:15.82821159 +0000 UTC m=+19.790748546 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.328295 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.328683 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.329088 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.329408 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.329958 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.330149 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.330188 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.330284 4878 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.331046 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.330958 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.331394 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.331691 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.331757 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.331769 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.331852 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.331901 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.332048 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.332256 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:15.832223683 +0000 UTC m=+19.794760639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.332287 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.332357 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.332750 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.333432 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.338073 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.338699 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.341482 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.345688 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.345675 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.346419 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.346541 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.346578 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.346630 4878 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.346836 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:15.846796227 +0000 UTC m=+19.809333183 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.346478 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.348999 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.349221 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.350479 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.352269 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.352328 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.352360 4878 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.352452 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:15.852420361 +0000 UTC m=+19.814957317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.352472 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.352858 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.352924 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.354484 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.354503 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.354522 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.355381 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.354838 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.355158 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.352397 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.355271 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.356233 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.358244 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.358762 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.359315 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.359800 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.360116 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.360802 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.361269 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.362952 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.364512 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.365436 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.372178 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.372809 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.376767 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.365061 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.380078 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.380548 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.394403 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.400341 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.405535 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.407915 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.422577 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.422886 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423051 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423142 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423296 4878 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423382 4878 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423551 4878 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423636 4878 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423720 4878 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423796 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423898 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.423995 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424068 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424150 4878 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424229 4878 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424304 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424378 4878 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424474 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424554 4878 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424632 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node 
\"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424699 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424759 4878 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424814 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424889 4878 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.424977 4878 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.425045 4878 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.425101 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.425164 4878 reconciler_common.go:293] "Volume 
detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.425247 4878 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.425326 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.425398 4878 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.425480 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.425580 4878 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.426965 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427246 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427342 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427410 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427470 4878 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427537 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427614 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427714 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427800 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" 
DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427911 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427999 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428080 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428160 4878 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428239 4878 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428310 4878 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428392 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428472 4878 reconciler_common.go:293] 
"Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428554 4878 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428641 4878 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428721 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428793 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428901 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.428994 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.429077 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.429152 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.429229 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.429314 4878 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.427813 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.429396 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430607 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430627 4878 reconciler_common.go:293] "Volume detached for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430644 4878 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430658 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430671 4878 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430682 4878 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430694 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430706 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430719 4878 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430735 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430750 4878 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430762 4878 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430773 4878 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430784 4878 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430797 4878 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430809 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430822 4878 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430834 4878 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430846 4878 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430857 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430890 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430906 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430922 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430934 4878 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430946 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430958 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430970 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430982 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.430995 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431010 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") 
on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431022 4878 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431035 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431048 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431059 4878 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431070 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431081 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431093 4878 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431105 4878 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431116 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431131 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431143 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431154 4878 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431166 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431179 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431190 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431202 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431214 4878 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431226 4878 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431237 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431253 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431263 4878 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431274 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431286 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431297 4878 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431309 4878 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431321 4878 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431333 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431347 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.431358 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node 
\"crc\" DevicePath \"\"" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.441635 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.447731 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.450451 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.459411 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 04 15:36:15 crc kubenswrapper[4878]: W1204 15:36:15.460487 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-a7c47895ac3ebeca9b06fe6b63fc424d36e0d6477f1e5792d18eeb6e18867928 WatchSource:0}: Error finding container a7c47895ac3ebeca9b06fe6b63fc424d36e0d6477f1e5792d18eeb6e18867928: Status 404 returned error can't find the container with id a7c47895ac3ebeca9b06fe6b63fc424d36e0d6477f1e5792d18eeb6e18867928 Dec 04 15:36:15 crc kubenswrapper[4878]: W1204 15:36:15.461056 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-acfa71246bbfecbbe52e77749676ed519538fabd74c2e0069eb5913827cbb738 WatchSource:0}: Error finding container acfa71246bbfecbbe52e77749676ed519538fabd74c2e0069eb5913827cbb738: Status 404 returned error can't find the container with id acfa71246bbfecbbe52e77749676ed519538fabd74c2e0069eb5913827cbb738 Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.462167 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.482885 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.506269 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.533261 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.552250 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.571565 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.584197 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.836013 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.836150 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.836200 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.836260 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:16.836218436 +0000 UTC m=+20.798755402 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.836275 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.836362 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:16.836353259 +0000 UTC m=+20.798890215 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.836424 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.836598 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-04 15:36:16.836558424 +0000 UTC m=+20.799095420 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.936793 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:15 crc kubenswrapper[4878]: I1204 15:36:15.936841 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.937038 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.937058 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.937071 4878 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.937146 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:16.937115413 +0000 UTC m=+20.899652359 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.937156 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.937227 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.937252 4878 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:15 crc kubenswrapper[4878]: E1204 15:36:15.937380 4878 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:16.937343129 +0000 UTC m=+20.899880145 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.178603 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.178732 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.293212 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"003b23eca3899014e78b10cf2dbb078ce8f56b723bf22f7b5a4d38c97c2658f1"} Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.295010 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717"} Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.295092 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914"} Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.295112 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"acfa71246bbfecbbe52e77749676ed519538fabd74c2e0069eb5913827cbb738"} Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.296292 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64"} Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.296327 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a7c47895ac3ebeca9b06fe6b63fc424d36e0d6477f1e5792d18eeb6e18867928"} Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.303736 4878 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.314669 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.329963 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.343555 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.361029 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.377173 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.393086 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.406246 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.416834 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.430049 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.444072 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.458164 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.472081 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.485806 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.498196 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.510421 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.523396 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:16Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.845479 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.845567 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.845604 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.845725 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.845757 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:18.84571663 +0000 UTC m=+22.808253586 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.845797 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:18.845788081 +0000 UTC m=+22.808325127 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.845844 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.845962 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:18.845905084 +0000 UTC m=+22.808442040 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.947090 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:16 crc kubenswrapper[4878]: I1204 15:36:16.947144 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.947271 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.947306 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.947322 4878 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.947388 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:18.947368546 +0000 UTC m=+22.909905502 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.947270 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.947414 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.947426 4878 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:16 crc kubenswrapper[4878]: E1204 15:36:16.947482 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-12-04 15:36:18.947447458 +0000 UTC m=+22.909984414 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.178992 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.179000 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:17 crc kubenswrapper[4878]: E1204 15:36:17.179207 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:17 crc kubenswrapper[4878]: E1204 15:36:17.179406 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.183170 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.183971 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.184744 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.185503 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.187044 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.187661 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.188731 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.189943 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.191106 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.191696 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.192254 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.193415 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.194134 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.195219 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.195830 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.198008 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.199822 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.200360 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.204791 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.205670 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.206193 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.207642 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.208153 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.208838 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.209278 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.209918 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.210744 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.211299 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.212001 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.212478 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.212976 4878 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.213079 4878 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.214505 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.215075 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.215516 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.216646 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.216667 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.220657 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.221258 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.222965 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.223951 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.224848 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.225457 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.226562 4878 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.227669 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.228170 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.229357 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.229913 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.231012 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.231526 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.232024 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.232850 4878 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.233480 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.233695 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.234425 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.234898 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.249341 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.270597 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.285312 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.299077 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.320059 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:17 crc kubenswrapper[4878]: I1204 15:36:17.335656 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.098336 4878 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.100752 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.100802 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.101210 4878 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.101398 4878 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.115668 4878 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.116139 4878 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.117676 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.117721 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.117734 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.117754 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.117769 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.142529 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.148987 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.149056 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.149066 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.149086 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.149101 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.163404 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.168534 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.168609 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.168628 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.168653 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.168672 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.178910 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.179086 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.185381 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.189226 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.189264 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.189275 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.189292 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.189303 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.207306 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.211672 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.211726 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.211779 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.211796 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.211806 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.225128 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.225255 4878 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.227140 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.227183 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.227194 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.227209 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.227220 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.302764 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.316839 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.329587 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.329641 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.329657 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.329678 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.329692 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.333235 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04
T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.350210 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.365338 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.380699 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.413230 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.426219 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.432995 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.433042 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.433054 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.433074 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.433085 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.441610 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:18Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.536012 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.536059 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.536073 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.536093 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.536105 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.638549 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.638589 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.638598 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.638615 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.638625 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.741410 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.741465 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.741478 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.741499 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.741511 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.844129 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.844229 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.844274 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.844302 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.844322 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.865224 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.865428 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.865467 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:22.865428734 +0000 UTC m=+26.827965700 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.865554 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.865620 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.865726 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:22.865698561 +0000 UTC m=+26.828235687 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.865745 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.865891 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:22.865841945 +0000 UTC m=+26.828378901 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.947328 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.947387 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.947406 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.947429 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:18 crc 
kubenswrapper[4878]: I1204 15:36:18.947443 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:18Z","lastTransitionTime":"2025-12-04T15:36:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.967059 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:18 crc kubenswrapper[4878]: I1204 15:36:18.967111 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.967297 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.967317 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.967330 4878 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.967348 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.967390 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.967396 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:22.967375508 +0000 UTC m=+26.929912464 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.967404 4878 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:18 crc kubenswrapper[4878]: E1204 15:36:18.967477 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:22.96745217 +0000 UTC m=+26.929989196 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.050534 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.050620 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.050632 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.050652 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.050666 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.153374 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.153437 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.153450 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.153471 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.153483 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.179131 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.179152 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:19 crc kubenswrapper[4878]: E1204 15:36:19.179323 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:19 crc kubenswrapper[4878]: E1204 15:36:19.179528 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.255972 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.256033 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.256046 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.256074 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.256088 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.359325 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.359397 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.359415 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.359438 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.359450 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.462251 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.462303 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.462314 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.462338 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.462353 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.565813 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.565883 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.565894 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.565910 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.565921 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.589050 4878 csr.go:261] certificate signing request csr-qqwgx is approved, waiting to be issued Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.596581 4878 csr.go:257] certificate signing request csr-qqwgx is issued Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.668695 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.668742 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.668751 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.668769 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.668782 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.771337 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.771378 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.771390 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.771409 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.771424 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.874516 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.874563 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.874575 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.874590 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.874603 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.977225 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.977281 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.977291 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.977309 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.977324 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:19Z","lastTransitionTime":"2025-12-04T15:36:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.996531 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5bgh4"] Dec 04 15:36:19 crc kubenswrapper[4878]: I1204 15:36:19.996902 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5bgh4" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.000177 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.000202 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.000253 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.021120 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358
25771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.043078 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.059816 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.074728 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.077014 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtz4g\" (UniqueName: \"kubernetes.io/projected/ea88ea7e-f678-42eb-9a92-ccc0a32f096e-kube-api-access-gtz4g\") pod \"node-resolver-5bgh4\" (UID: \"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\") " pod="openshift-dns/node-resolver-5bgh4" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.077056 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ea88ea7e-f678-42eb-9a92-ccc0a32f096e-hosts-file\") pod \"node-resolver-5bgh4\" (UID: \"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\") " pod="openshift-dns/node-resolver-5bgh4" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.082183 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.082229 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.082241 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.082259 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.082270 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:20Z","lastTransitionTime":"2025-12-04T15:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.093841 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b481
3c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.108036 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.120635 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.130393 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.144300 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.178475 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtz4g\" (UniqueName: \"kubernetes.io/projected/ea88ea7e-f678-42eb-9a92-ccc0a32f096e-kube-api-access-gtz4g\") pod \"node-resolver-5bgh4\" (UID: \"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\") " pod="openshift-dns/node-resolver-5bgh4" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.178535 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ea88ea7e-f678-42eb-9a92-ccc0a32f096e-hosts-file\") pod \"node-resolver-5bgh4\" (UID: \"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\") " pod="openshift-dns/node-resolver-5bgh4" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.178626 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ea88ea7e-f678-42eb-9a92-ccc0a32f096e-hosts-file\") pod \"node-resolver-5bgh4\" (UID: \"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\") " 
pod="openshift-dns/node-resolver-5bgh4" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.178661 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:20 crc kubenswrapper[4878]: E1204 15:36:20.178805 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.185181 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.185225 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.185238 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.185258 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.185278 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:20Z","lastTransitionTime":"2025-12-04T15:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.197824 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtz4g\" (UniqueName: \"kubernetes.io/projected/ea88ea7e-f678-42eb-9a92-ccc0a32f096e-kube-api-access-gtz4g\") pod \"node-resolver-5bgh4\" (UID: \"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\") " pod="openshift-dns/node-resolver-5bgh4" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.287341 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.287395 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.287406 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.287429 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.287442 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:20Z","lastTransitionTime":"2025-12-04T15:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.310683 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5bgh4" Dec 04 15:36:20 crc kubenswrapper[4878]: W1204 15:36:20.322055 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea88ea7e_f678_42eb_9a92_ccc0a32f096e.slice/crio-b448630b1d4f009af46ab9d4db1361ecfdf15a3cb552e02675275142ae4f34c5 WatchSource:0}: Error finding container b448630b1d4f009af46ab9d4db1361ecfdf15a3cb552e02675275142ae4f34c5: Status 404 returned error can't find the container with id b448630b1d4f009af46ab9d4db1361ecfdf15a3cb552e02675275142ae4f34c5 Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.389395 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.389733 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.389745 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.389762 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.389772 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:20Z","lastTransitionTime":"2025-12-04T15:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.405133 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9p8p7"] Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.405505 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.407564 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.407617 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.408154 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.408511 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.410371 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.415539 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-xrwqw"] Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.415939 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xrkl9"] Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.416485 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.416738 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: W1204 15:36:20.418949 4878 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 04 15:36:20 crc kubenswrapper[4878]: E1204 15:36:20.419016 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 15:36:20 crc kubenswrapper[4878]: W1204 15:36:20.419561 4878 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Dec 04 15:36:20 crc kubenswrapper[4878]: W1204 15:36:20.419574 4878 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 04 15:36:20 crc kubenswrapper[4878]: E1204 15:36:20.419594 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch 
*v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 15:36:20 crc kubenswrapper[4878]: E1204 15:36:20.419643 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 15:36:20 crc kubenswrapper[4878]: W1204 15:36:20.419777 4878 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 04 15:36:20 crc kubenswrapper[4878]: E1204 15:36:20.419809 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 15:36:20 crc kubenswrapper[4878]: W1204 15:36:20.419892 4878 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: 
User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 04 15:36:20 crc kubenswrapper[4878]: E1204 15:36:20.419913 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 15:36:20 crc kubenswrapper[4878]: W1204 15:36:20.419939 4878 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 04 15:36:20 crc kubenswrapper[4878]: E1204 15:36:20.419993 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.420069 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: W1204 15:36:20.420694 4878 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Dec 04 15:36:20 crc kubenswrapper[4878]: E1204 15:36:20.420788 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to 
list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.446119 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.460384 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.472974 4878 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482294 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-run-k8s-cni-cncf-io\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482338 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-run-netns\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482363 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-system-cni-dir\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482401 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-socket-dir-parent\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482428 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-var-lib-cni-multus\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482535 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-daemon-config\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482641 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-var-lib-kubelet\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482700 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-cni-dir\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482722 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-cni-binary-copy\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482738 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkt5z\" (UniqueName: 
\"kubernetes.io/projected/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-kube-api-access-lkt5z\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482759 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482778 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-cnibin\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482794 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-hostroot\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482810 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-conf-dir\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.482831 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-mcd-auth-proxy-config\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.483714 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-os-release\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.483780 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-var-lib-cni-bin\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.483806 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-run-multus-certs\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.483834 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-os-release\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.483855 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q4hlq\" (UniqueName: \"kubernetes.io/projected/e694bb65-ccd1-4e85-921a-607943be54b2-kube-api-access-q4hlq\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.483897 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-proxy-tls\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.483931 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-system-cni-dir\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.483996 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-etc-kubernetes\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.484017 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e694bb65-ccd1-4e85-921a-607943be54b2-cni-binary-copy\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.484051 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkmlq\" (UniqueName: \"kubernetes.io/projected/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-kube-api-access-nkmlq\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.484078 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e694bb65-ccd1-4e85-921a-607943be54b2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.484118 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-rootfs\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.484151 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-cnibin\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.485977 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.499889 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.501325 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.501379 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.501390 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.501407 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.501418 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:20Z","lastTransitionTime":"2025-12-04T15:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.517056 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.547925 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585670 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-var-lib-kubelet\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") 
" pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585738 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-cni-dir\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585765 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-cni-binary-copy\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585774 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-var-lib-kubelet\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585789 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkt5z\" (UniqueName: \"kubernetes.io/projected/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-kube-api-access-lkt5z\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585810 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-cnibin\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585835 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-hostroot\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585856 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-conf-dir\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585903 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585928 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-mcd-auth-proxy-config\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585958 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-os-release\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.585982 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-var-lib-cni-bin\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586003 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-run-multus-certs\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586024 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-os-release\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586047 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4hlq\" (UniqueName: \"kubernetes.io/projected/e694bb65-ccd1-4e85-921a-607943be54b2-kube-api-access-q4hlq\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586080 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-proxy-tls\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586102 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-system-cni-dir\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586126 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-etc-kubernetes\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586146 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e694bb65-ccd1-4e85-921a-607943be54b2-cni-binary-copy\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586175 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkmlq\" (UniqueName: \"kubernetes.io/projected/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-kube-api-access-nkmlq\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586197 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-rootfs\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586219 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-cnibin\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586250 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e694bb65-ccd1-4e85-921a-607943be54b2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586271 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-run-k8s-cni-cncf-io\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586291 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-run-netns\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586312 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-system-cni-dir\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586344 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-socket-dir-parent\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586366 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-daemon-config\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586387 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-var-lib-cni-multus\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586478 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-cni-dir\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586484 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-var-lib-cni-multus\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586522 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-cnibin\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " 
pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586556 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-system-cni-dir\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586579 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-etc-kubernetes\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586594 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-hostroot\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586631 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-conf-dir\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586731 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-cni-binary-copy\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586824 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-run-multus-certs\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586917 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-os-release\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.586945 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-var-lib-cni-bin\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587022 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-run-k8s-cni-cncf-io\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587120 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-rootfs\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587162 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-cnibin\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") 
" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587159 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-host-run-netns\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587200 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-socket-dir-parent\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587225 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-system-cni-dir\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587323 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-os-release\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587393 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e694bb65-ccd1-4e85-921a-607943be54b2-cni-binary-copy\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 
15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587628 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e694bb65-ccd1-4e85-921a-607943be54b2-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.587700 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-multus-daemon-config\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.591968 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ac
e43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.598135 4878 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-04 15:31:19 +0000 UTC, rotation deadline is 2026-09-16 15:35:17.468245474 +0000 UTC Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.598223 4878 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6863h58m56.87002479s for next certificate rotation Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.612059 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lkt5z\" (UniqueName: \"kubernetes.io/projected/c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757-kube-api-access-lkt5z\") pod \"multus-9p8p7\" (UID: \"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\") " pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.612568 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.612626 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.612641 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.612662 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.612675 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:20Z","lastTransitionTime":"2025-12-04T15:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.614688 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.615154 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4hlq\" (UniqueName: \"kubernetes.io/projected/e694bb65-ccd1-4e85-921a-607943be54b2-kube-api-access-q4hlq\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.651863 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.671682 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.710953 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.715434 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.715488 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.715501 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.715523 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.715536 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:20Z","lastTransitionTime":"2025-12-04T15:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.720923 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9p8p7" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.729618 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: W1204 15:36:20.736131 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc13e3dc9_eb06_42c3_98c3_ce6c5ccd4757.slice/crio-e5e7b3a330560e42d3095c5d52efbd5d27b8b55c4dc629caee2c3d6fcb4e99ce WatchSource:0}: Error finding container e5e7b3a330560e42d3095c5d52efbd5d27b8b55c4dc629caee2c3d6fcb4e99ce: Status 404 returned error can't find the container with id e5e7b3a330560e42d3095c5d52efbd5d27b8b55c4dc629caee2c3d6fcb4e99ce Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 
15:36:20.753415 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.768897 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.781740 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qzptn"] Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.782662 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.787507 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.787507 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.788613 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.788662 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.788915 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.788953 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.788971 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.792718 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.808815 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.818629 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.818669 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.818680 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.818695 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.818707 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:20Z","lastTransitionTime":"2025-12-04T15:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.821868 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.834796 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.849043 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d6
08d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.861473 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.874613 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.886697 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.889950 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-systemd-units\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.889980 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-systemd\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890005 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-openvswitch\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890020 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-etc-openvswitch\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890042 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-config\") pod 
\"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890150 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-script-lib\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890195 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-slash\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890228 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-var-lib-openvswitch\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890272 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nxwl\" (UniqueName: \"kubernetes.io/projected/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-kube-api-access-4nxwl\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890332 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-node-log\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890377 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-netns\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890423 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-ovn\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890454 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-env-overrides\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890488 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-ovn-kubernetes\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890515 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890561 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-log-socket\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890595 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-netd\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890648 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-kubelet\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890684 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-bin\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.890736 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovn-node-metrics-cert\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.898464 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.920954 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.921900 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.921941 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.921951 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.921967 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.921976 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:20Z","lastTransitionTime":"2025-12-04T15:36:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.933818 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.946031 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.959785 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.972545 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.984059 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.991953 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovn-node-metrics-cert\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992004 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-systemd-units\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992028 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-systemd\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992053 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-openvswitch\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992080 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-etc-openvswitch\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992104 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-config\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992130 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-script-lib\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992148 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-slash\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992167 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-var-lib-openvswitch\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992184 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nxwl\" (UniqueName: \"kubernetes.io/projected/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-kube-api-access-4nxwl\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992186 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-systemd\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992213 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-etc-openvswitch\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992207 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-ovn\") pod 
\"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992231 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-systemd-units\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992268 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-ovn\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992363 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-var-lib-openvswitch\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992427 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-openvswitch\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992432 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-node-log\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 
04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992471 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-slash\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992527 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-netns\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992689 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-node-log\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992706 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-env-overrides\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992730 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-netns\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992796 4878 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-ovn-kubernetes\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992889 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-ovn-kubernetes\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992895 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992931 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.992995 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-log-socket\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993031 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-log-socket\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993064 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-netd\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993085 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-kubelet\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993120 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-bin\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993169 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-bin\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993174 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-netd\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993190 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-kubelet\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993452 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-env-overrides\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993445 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-script-lib\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.993590 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-config\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.996092 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovn-node-metrics-cert\") pod \"ovnkube-node-qzptn\" 
(UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:20 crc kubenswrapper[4878]: I1204 15:36:20.996474 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.008702 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.010339 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nxwl\" (UniqueName: \"kubernetes.io/projected/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-kube-api-access-4nxwl\") pod \"ovnkube-node-qzptn\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.022811 4878 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.024563 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.024594 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.024604 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.024619 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.024660 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.035951 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.094184 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.127636 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.127693 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.127707 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.127729 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.127743 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: W1204 15:36:21.130264 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b6e8498_be44_4b9c_9dd3_dc08f9515f2e.slice/crio-92827045b9819297e4d561b980b595eeec4c764e7c27bcf1c8abcfe798d4544a WatchSource:0}: Error finding container 92827045b9819297e4d561b980b595eeec4c764e7c27bcf1c8abcfe798d4544a: Status 404 returned error can't find the container with id 92827045b9819297e4d561b980b595eeec4c764e7c27bcf1c8abcfe798d4544a Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.178855 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.178958 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:21 crc kubenswrapper[4878]: E1204 15:36:21.179023 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:21 crc kubenswrapper[4878]: E1204 15:36:21.179126 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.230373 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.230419 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.230433 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.230454 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.230472 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.312542 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"92827045b9819297e4d561b980b595eeec4c764e7c27bcf1c8abcfe798d4544a"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.314037 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p8p7" event={"ID":"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757","Type":"ContainerStarted","Data":"b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.314078 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p8p7" event={"ID":"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757","Type":"ContainerStarted","Data":"e5e7b3a330560e42d3095c5d52efbd5d27b8b55c4dc629caee2c3d6fcb4e99ce"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.315675 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5bgh4" event={"ID":"ea88ea7e-f678-42eb-9a92-ccc0a32f096e","Type":"ContainerStarted","Data":"ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.315735 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5bgh4" event={"ID":"ea88ea7e-f678-42eb-9a92-ccc0a32f096e","Type":"ContainerStarted","Data":"b448630b1d4f009af46ab9d4db1361ecfdf15a3cb552e02675275142ae4f34c5"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.331581 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.333226 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.333282 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.333299 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.333321 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.333337 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.346458 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.349183 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.360580 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.373211 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.377965 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.386600 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.390831 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-proxy-tls\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.400387 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.412197 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.424962 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.435326 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.435391 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.435400 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.435420 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.435437 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.437541 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.451372 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.463481 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.476656 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.488128 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.497760 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-mcd-auth-proxy-config\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " 
pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.498921 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.503045 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.508267 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e694bb65-ccd1-4e85-921a-607943be54b2-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xrkl9\" (UID: \"e694bb65-ccd1-4e85-921a-607943be54b2\") " pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.515763 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.529197 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.538049 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.538085 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.538094 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.538112 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.538122 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.546158 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.559503 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.572759 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.585859 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.598592 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: E1204 15:36:21.608640 4878 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 04 15:36:21 crc kubenswrapper[4878]: E1204 15:36:21.608775 4878 projected.go:194] Error preparing data for projected volume kube-api-access-nkmlq for pod openshift-machine-config-operator/machine-config-daemon-xrwqw: failed to sync configmap cache: timed out waiting for the condition Dec 04 15:36:21 crc kubenswrapper[4878]: E1204 15:36:21.608921 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-kube-api-access-nkmlq podName:a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:22.108830315 +0000 UTC m=+26.071367271 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nkmlq" (UniqueName: "kubernetes.io/projected/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-kube-api-access-nkmlq") pod "machine-config-daemon-xrwqw" (UID: "a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92") : failed to sync configmap cache: timed out waiting for the condition Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.613734 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.624432 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.635845 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.640477 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.640834 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.640866 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.640894 4878 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.640916 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.640929 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.647685 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.657177 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.672760 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.743673 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.743720 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.743731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.743751 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.743763 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.845082 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.846034 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.846081 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.846094 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.846119 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.846133 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.853403 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" Dec 04 15:36:21 crc kubenswrapper[4878]: W1204 15:36:21.866022 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode694bb65_ccd1_4e85_921a_607943be54b2.slice/crio-4024e217cc02ffe4c3cdf1b246432a3e5eba6666e06f73093c699f3b94255f66 WatchSource:0}: Error finding container 4024e217cc02ffe4c3cdf1b246432a3e5eba6666e06f73093c699f3b94255f66: Status 404 returned error can't find the container with id 4024e217cc02ffe4c3cdf1b246432a3e5eba6666e06f73093c699f3b94255f66 Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.900944 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-6rrvz"] Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.901390 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.904430 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.904828 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.905059 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.905099 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.930294 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.945203 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.949347 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.949519 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.949585 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.949651 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.949725 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:21Z","lastTransitionTime":"2025-12-04T15:36:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.959473 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.971296 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.972851 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approv
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc 
kubenswrapper[4878]: I1204 15:36:21.985766 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:21 crc kubenswrapper[4878]: I1204 15:36:21.999594 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.003550 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgkw6\" 
(UniqueName: \"kubernetes.io/projected/253bac41-fb3d-4fa1-8586-30fb4b47ea9a-kube-api-access-sgkw6\") pod \"node-ca-6rrvz\" (UID: \"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\") " pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.003588 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/253bac41-fb3d-4fa1-8586-30fb4b47ea9a-host\") pod \"node-ca-6rrvz\" (UID: \"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\") " pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.003626 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/253bac41-fb3d-4fa1-8586-30fb4b47ea9a-serviceca\") pod \"node-ca-6rrvz\" (UID: \"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\") " pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.012311 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.026303 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.040871 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.052518 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.053243 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.053347 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.053407 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.053470 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.053853 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.068034 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b
6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.081689 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.093133 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.104273 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/253bac41-fb3d-4fa1-8586-30fb4b47ea9a-serviceca\") pod \"node-ca-6rrvz\" (UID: \"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\") " pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.104360 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgkw6\" (UniqueName: \"kubernetes.io/projected/253bac41-fb3d-4fa1-8586-30fb4b47ea9a-kube-api-access-sgkw6\") pod \"node-ca-6rrvz\" (UID: \"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\") " pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.104383 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/253bac41-fb3d-4fa1-8586-30fb4b47ea9a-host\") pod \"node-ca-6rrvz\" (UID: \"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\") " pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.104439 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host\" (UniqueName: \"kubernetes.io/host-path/253bac41-fb3d-4fa1-8586-30fb4b47ea9a-host\") pod \"node-ca-6rrvz\" (UID: \"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\") " pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.105333 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/253bac41-fb3d-4fa1-8586-30fb4b47ea9a-serviceca\") pod \"node-ca-6rrvz\" (UID: \"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\") " pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.133570 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.151799 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgkw6\" (UniqueName: \"kubernetes.io/projected/253bac41-fb3d-4fa1-8586-30fb4b47ea9a-kube-api-access-sgkw6\") pod \"node-ca-6rrvz\" (UID: \"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\") " pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.159137 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.159177 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.159186 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.159203 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.159213 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.179544 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:22 crc kubenswrapper[4878]: E1204 15:36:22.179686 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.205098 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkmlq\" (UniqueName: \"kubernetes.io/projected/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-kube-api-access-nkmlq\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.207995 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkmlq\" (UniqueName: \"kubernetes.io/projected/a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92-kube-api-access-nkmlq\") pod \"machine-config-daemon-xrwqw\" (UID: \"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\") " pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.213818 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-6rrvz" Dec 04 15:36:22 crc kubenswrapper[4878]: W1204 15:36:22.227426 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod253bac41_fb3d_4fa1_8586_30fb4b47ea9a.slice/crio-17f3dc924c1b3de20d9e11e9665dde508875f566f9a4cae695673520e52886a6 WatchSource:0}: Error finding container 17f3dc924c1b3de20d9e11e9665dde508875f566f9a4cae695673520e52886a6: Status 404 returned error can't find the container with id 17f3dc924c1b3de20d9e11e9665dde508875f566f9a4cae695673520e52886a6 Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.233154 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.262395 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.262439 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.262450 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.262469 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.262494 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: W1204 15:36:22.287283 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a2bf0a_1c17_4fc6_af13_ee239dfc6a92.slice/crio-42d7266c34fb3d533fd640ea7f6b4012e380cfca46b3369c6f78dbe8b6a60115 WatchSource:0}: Error finding container 42d7266c34fb3d533fd640ea7f6b4012e380cfca46b3369c6f78dbe8b6a60115: Status 404 returned error can't find the container with id 42d7266c34fb3d533fd640ea7f6b4012e380cfca46b3369c6f78dbe8b6a60115 Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.320308 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"42d7266c34fb3d533fd640ea7f6b4012e380cfca46b3369c6f78dbe8b6a60115"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.326931 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305" exitCode=0 Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.327036 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.328479 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6rrvz" event={"ID":"253bac41-fb3d-4fa1-8586-30fb4b47ea9a","Type":"ContainerStarted","Data":"17f3dc924c1b3de20d9e11e9665dde508875f566f9a4cae695673520e52886a6"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.330009 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" 
event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerStarted","Data":"c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.330042 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerStarted","Data":"4024e217cc02ffe4c3cdf1b246432a3e5eba6666e06f73093c699f3b94255f66"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.339832 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.361259 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.366447 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.366494 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.366505 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.366523 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.366535 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.376340 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.391819 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.407795 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.422474 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.436658 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.469857 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.469926 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.469940 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.469960 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.469977 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.475501 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.511403 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.550472 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.572247 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.572293 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.572302 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.572321 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.572332 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.587977 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.630149 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.674751 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.675655 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.675690 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.675741 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.675762 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.675777 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.708361 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:
36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.752664 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.778680 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.778719 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.778734 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.778754 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.778768 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.790297 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b
6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.826964 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.866770 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.881318 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.881362 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.881376 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.881394 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.881404 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.908891 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.912297 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:22 crc kubenswrapper[4878]: E1204 15:36:22.912420 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:30.912392889 +0000 UTC m=+34.874929845 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.913309 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.913394 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:22 crc kubenswrapper[4878]: E1204 15:36:22.913526 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:22 crc kubenswrapper[4878]: E1204 15:36:22.913596 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:30.913579809 +0000 UTC m=+34.876116845 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:22 crc kubenswrapper[4878]: E1204 15:36:22.913526 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:22 crc kubenswrapper[4878]: E1204 15:36:22.913673 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:30.913659501 +0000 UTC m=+34.876196457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.949169 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.984782 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.984828 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.984838 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.984859 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.984886 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:22Z","lastTransitionTime":"2025-12-04T15:36:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:22 crc kubenswrapper[4878]: I1204 15:36:22.990195 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:22Z 
is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.014342 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.014454 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.014536 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.014569 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.014585 4878 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.014659 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2025-12-04 15:36:31.01464198 +0000 UTC m=+34.977178936 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.014653 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.014689 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.014698 4878 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.014720 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:31.014714882 +0000 UTC m=+34.977251838 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.028660 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kub
e\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.068292 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.087160 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.087213 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.087221 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.087239 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.087249 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:23Z","lastTransitionTime":"2025-12-04T15:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.107114 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.150904 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.178607 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.178754 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.178814 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:23 crc kubenswrapper[4878]: E1204 15:36:23.178980 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.188991 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.189036 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.189049 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.189068 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.189079 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:23Z","lastTransitionTime":"2025-12-04T15:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.191765 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.229975 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.270193 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\"
:\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPa
th\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.291250 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.291335 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.291346 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 
15:36:23.291375 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.291388 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:23Z","lastTransitionTime":"2025-12-04T15:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.336922 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.336984 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.341420 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.341476 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.341489 
4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.343073 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6rrvz" event={"ID":"253bac41-fb3d-4fa1-8586-30fb4b47ea9a","Type":"ContainerStarted","Data":"cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.345656 4878 generic.go:334] "Generic (PLEG): container finished" podID="e694bb65-ccd1-4e85-921a-607943be54b2" containerID="c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8" exitCode=0 Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.345691 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerDied","Data":"c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.353361 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.367121 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.387943 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.393612 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.393662 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.393675 4878 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.393698 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.393712 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:23Z","lastTransitionTime":"2025-12-04T15:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.428937 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e
4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.469643 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd890
9e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.497143 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.497188 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.497202 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.497224 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.497239 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:23Z","lastTransitionTime":"2025-12-04T15:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.509407 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.550289 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.588487 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f
7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.599509 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.599553 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.599563 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:23 crc 
kubenswrapper[4878]: I1204 15:36:23.599580 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.599591 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:23Z","lastTransitionTime":"2025-12-04T15:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.625767 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.676531 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.708447 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.708488 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.708498 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.708516 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.708529 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:23Z","lastTransitionTime":"2025-12-04T15:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.714093 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var
/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.767288 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.788352 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.811194 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.811246 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.811262 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.811285 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.811299 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:23Z","lastTransitionTime":"2025-12-04T15:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.828501 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.870490 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.907406 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154ed
c32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 
2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.914138 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.914173 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.914186 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.914205 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.914216 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:23Z","lastTransitionTime":"2025-12-04T15:36:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.948929 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:23 crc kubenswrapper[4878]: I1204 15:36:23.988547 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:23Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.016958 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc 
kubenswrapper[4878]: I1204 15:36:24.017004 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.017015 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.017033 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.017046 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.026409 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.070047 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.106423 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f
7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.119477 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.119519 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.119530 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc 
kubenswrapper[4878]: I1204 15:36:24.119550 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.119563 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.147675 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.178716 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:24 crc kubenswrapper[4878]: E1204 15:36:24.178934 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.188027 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.222450 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.222504 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.222523 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.222546 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.222560 4878 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.227458 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"
iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.267604 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.317568 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.325045 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.325085 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.325104 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.325123 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.325134 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.350998 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.353611 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.353656 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.353672 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.355706 4878 generic.go:334] "Generic (PLEG): container finished" podID="e694bb65-ccd1-4e85-921a-607943be54b2" containerID="55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63" exitCode=0 Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.355734 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-xrkl9" event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerDied","Data":"55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.394262 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.428411 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.429350 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.429413 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.429428 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.429451 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.429467 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.466221 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.506270 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.531941 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.531990 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.532002 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.532017 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.532027 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.563060 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.588295 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.630213 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.633849 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.633896 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.633905 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.633924 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.633935 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.671942 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.708789 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.736849 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.736908 4878 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.736923 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.736947 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.736960 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.751331 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.789750 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.830977 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.839704 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.839759 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.839772 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.839794 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.839806 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.870311 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.907832 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f
7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.942330 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.942379 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.942389 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:24 crc 
kubenswrapper[4878]: I1204 15:36:24.942406 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.942418 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:24Z","lastTransitionTime":"2025-12-04T15:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:24 crc kubenswrapper[4878]: I1204 15:36:24.947144 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:24Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.045268 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.045319 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.045332 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.045351 4878 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.045363 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.147810 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.147856 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.147867 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.147908 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.147920 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.178932 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.179001 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:25 crc kubenswrapper[4878]: E1204 15:36:25.179103 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:25 crc kubenswrapper[4878]: E1204 15:36:25.179245 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.250956 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.251001 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.251011 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.251027 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.251037 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.353670 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.353711 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.353721 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.353739 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.353750 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.360921 4878 generic.go:334] "Generic (PLEG): container finished" podID="e694bb65-ccd1-4e85-921a-607943be54b2" containerID="315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0" exitCode=0 Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.360973 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerDied","Data":"315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.376107 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.389529 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.403314 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.429739 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.447757 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.459359 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.459412 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.459424 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.459445 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.459458 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.464405 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.479721 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.494569 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.508823 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.522799 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.536721 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.553179 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.562153 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.562199 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.562211 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.562229 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.562242 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.585983 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.600947 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:25Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.664641 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.664686 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.664701 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.664720 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.664733 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.767169 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.767240 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.767253 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.767274 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.767286 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.869778 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.869828 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.869842 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.869861 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.869891 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.972377 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.972441 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.972457 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.972482 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:25 crc kubenswrapper[4878]: I1204 15:36:25.972501 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:25Z","lastTransitionTime":"2025-12-04T15:36:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.074911 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.074946 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.074954 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.074971 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.074981 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.177839 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.177921 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.177935 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.177954 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.177967 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.178535 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:26 crc kubenswrapper[4878]: E1204 15:36:26.178656 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.280306 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.280356 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.280370 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.280389 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.280401 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.368540 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.371184 4878 generic.go:334] "Generic (PLEG): container finished" podID="e694bb65-ccd1-4e85-921a-607943be54b2" containerID="877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef" exitCode=0 Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.371237 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerDied","Data":"877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.382733 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.382789 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.382802 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.382824 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.382840 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.392704 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.406789 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.428213 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.443184 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.458414 4878 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.473516 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.485353 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.485413 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.485429 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.485451 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.485466 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.488624 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.502424 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.517464 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.534758 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.549622 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.567139 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.582527 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f
7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.587381 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.587429 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.587444 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc 
kubenswrapper[4878]: I1204 15:36:26.587466 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.587482 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.593398 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:26Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.690512 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.690572 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.690583 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.690603 4878 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.690614 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.793031 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.793101 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.793114 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.793480 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.793522 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.896736 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.896787 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.896799 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.896819 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.896833 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.999777 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.999835 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.999847 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.999882 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:26 crc kubenswrapper[4878]: I1204 15:36:26.999895 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:26Z","lastTransitionTime":"2025-12-04T15:36:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.038648 4878 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.102429 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.102493 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.102511 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.102532 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.102547 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:27Z","lastTransitionTime":"2025-12-04T15:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.178740 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.178743 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:27 crc kubenswrapper[4878]: E1204 15:36:27.178946 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:27 crc kubenswrapper[4878]: E1204 15:36:27.179046 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.192557 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.205069 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.205132 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.205144 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.205167 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.205181 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:27Z","lastTransitionTime":"2025-12-04T15:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.209092 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.222278 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.238259 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.259252 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.279890 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.299037 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.307885 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.307931 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.307940 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.307958 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.308005 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:27Z","lastTransitionTime":"2025-12-04T15:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.312677 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.324404 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.339197 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.350477 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.372434 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.379104 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerStarted","Data":"bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.389947 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.402684 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.409936 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.409984 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.409998 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.410018 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.410030 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:27Z","lastTransitionTime":"2025-12-04T15:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.416105 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.432156 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.446589 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.461627 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.477068 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.492314 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.506274 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f
7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.512930 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.512984 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.512996 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:27 crc 
kubenswrapper[4878]: I1204 15:36:27.513018 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.513032 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:27Z","lastTransitionTime":"2025-12-04T15:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.517603 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.531403 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.546752 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.567114 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.582797 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.598789 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.615693 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.615727 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.615735 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.615750 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.615766 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:27Z","lastTransitionTime":"2025-12-04T15:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.615921 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.718691 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.718740 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.718751 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.718771 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.718798 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:27Z","lastTransitionTime":"2025-12-04T15:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.821659 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.821720 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.821730 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.821745 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.821755 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:27Z","lastTransitionTime":"2025-12-04T15:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.925379 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.925433 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.925445 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.925464 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:27 crc kubenswrapper[4878]: I1204 15:36:27.925476 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:27Z","lastTransitionTime":"2025-12-04T15:36:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.028758 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.028842 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.028852 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.028890 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.028900 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.131688 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.131775 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.131790 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.131813 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.131829 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.178914 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:28 crc kubenswrapper[4878]: E1204 15:36:28.179050 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.240087 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.240149 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.240165 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.240189 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.240241 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.293761 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.293810 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.293823 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.293845 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.293859 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: E1204 15:36:28.311391 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.316047 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.316095 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.316108 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.316126 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.316139 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: E1204 15:36:28.328900 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.333705 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.333749 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.333762 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.333784 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.333797 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: E1204 15:36:28.348283 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.352833 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.352911 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.352928 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.352947 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.352961 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: E1204 15:36:28.365222 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.370320 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.370373 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.370387 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.370414 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.370433 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: E1204 15:36:28.387393 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: E1204 15:36:28.387593 4878 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.388738 4878 generic.go:334] "Generic (PLEG): container finished" podID="e694bb65-ccd1-4e85-921a-607943be54b2" containerID="bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b" exitCode=0 Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.388796 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerDied","Data":"bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.389612 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.389661 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.389680 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.389705 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.389724 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.408249 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.423778 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.439823 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.455186 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.467335 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.481213 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.492142 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc 
kubenswrapper[4878]: I1204 15:36:28.492175 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.492186 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.492204 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.492220 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.493918 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.509518 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.521811 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f
7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.531460 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.546967 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.561341 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.572453 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.591994 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:28Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.594891 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.594938 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.594948 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.594967 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.594986 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.697833 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.697862 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.697884 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.697902 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.697912 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.801951 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.801999 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.802022 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.802044 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.802055 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.904655 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.904705 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.904717 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.904737 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:28 crc kubenswrapper[4878]: I1204 15:36:28.904750 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:28Z","lastTransitionTime":"2025-12-04T15:36:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.007412 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.007479 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.007501 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.007528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.007548 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.110286 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.110524 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.110687 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.110764 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.110837 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.178690 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.178773 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:29 crc kubenswrapper[4878]: E1204 15:36:29.179014 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:29 crc kubenswrapper[4878]: E1204 15:36:29.179123 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.213196 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.213247 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.213259 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.213277 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.213289 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.315892 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.315934 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.315944 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.315960 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.315970 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.397573 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.397999 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.398179 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.398237 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.401549 4878 generic.go:334] "Generic (PLEG): container finished" podID="e694bb65-ccd1-4e85-921a-607943be54b2" containerID="43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38" exitCode=0 Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.401597 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerDied","Data":"43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.416706 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.419079 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.419127 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.419145 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.419165 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.419176 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.432794 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.433326 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.441572 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.457403 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f
7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.469255 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.483050 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.496289 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.507615 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.524518 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.524586 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.524629 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.524652 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.524664 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.527022 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.544657 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.560982 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.576233 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.590975 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.607604 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.621402 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.627408 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc 
kubenswrapper[4878]: I1204 15:36:29.627444 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.627453 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.627470 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.627482 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.633170 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.645973 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.658066 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.672834 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.686171 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.699143 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5
aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.709260 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f
7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.717225 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.729477 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.729512 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.729522 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.729540 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.729553 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.731516 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b
6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.746102 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.755477 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.773165 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.786651 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.798668 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.831844 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.831912 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc 
kubenswrapper[4878]: I1204 15:36:29.831924 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.831939 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.831950 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.934487 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.934522 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.934532 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.934548 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:29 crc kubenswrapper[4878]: I1204 15:36:29.934558 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:29Z","lastTransitionTime":"2025-12-04T15:36:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.039014 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.039070 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.039081 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.039098 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.039107 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.141454 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.141497 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.141505 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.141520 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.141530 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.179265 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:30 crc kubenswrapper[4878]: E1204 15:36:30.179448 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.244810 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.244855 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.244866 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.244898 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.244910 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.348008 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.348064 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.348075 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.348095 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.348110 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.409892 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" event={"ID":"e694bb65-ccd1-4e85-921a-607943be54b2","Type":"ContainerStarted","Data":"bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.426420 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.439821 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 
15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.450400 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.450442 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.450454 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.450473 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.450485 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.454807 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.468990 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.481503 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.496107 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.513185 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: 
I1204 15:36:30.525571 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.546408 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7937
9b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.553126 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.553170 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.553184 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.553207 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.553219 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.563764 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.578055 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.604836 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.621258 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.643256 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.656341 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.656386 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc 
kubenswrapper[4878]: I1204 15:36:30.656394 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.656415 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.656429 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.759184 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.759240 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.759252 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.759271 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.759283 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.861493 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.861531 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.861540 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.861556 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.861567 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.964175 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.964224 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.964234 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.964255 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:30 crc kubenswrapper[4878]: I1204 15:36:30.964266 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:30Z","lastTransitionTime":"2025-12-04T15:36:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.002774 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.002954 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.003065 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:47.003021842 +0000 UTC m=+50.965558798 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.003078 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.003152 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:47.003142495 +0000 UTC m=+50.965679581 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.003269 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.003527 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.003606 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:47.003582456 +0000 UTC m=+50.966119412 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.067237 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.067276 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.067287 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.067305 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.067318 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:31Z","lastTransitionTime":"2025-12-04T15:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.104935 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.104987 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.105155 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.105303 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.105341 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.105356 4878 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 
15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.105421 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:47.105396847 +0000 UTC m=+51.067933803 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.105703 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.105731 4878 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.105791 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 15:36:47.105774947 +0000 UTC m=+51.068311903 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.170715 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.170769 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.170785 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.170808 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.170822 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:31Z","lastTransitionTime":"2025-12-04T15:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.179527 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.179681 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.179748 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:31 crc kubenswrapper[4878]: E1204 15:36:31.179973 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.273833 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.273889 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.273899 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.273917 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.273930 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:31Z","lastTransitionTime":"2025-12-04T15:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.376563 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.377267 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.377341 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.377484 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.377551 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:31Z","lastTransitionTime":"2025-12-04T15:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.415798 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/0.log" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.420132 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8" exitCode=1 Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.420187 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.420921 4878 scope.go:117] "RemoveContainer" containerID="a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.438755 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.451447 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: 
I1204 15:36:31.464623 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.479063 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.480602 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.480650 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.480666 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:31 crc 
kubenswrapper[4878]: I1204 15:36:31.480798 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.480828 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:31Z","lastTransitionTime":"2025-12-04T15:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.490809 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.513725 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:31Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI1204 15:36:31.203754 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 15:36:31.203769 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI1204 15:36:31.203794 6143 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 15:36:31.203808 6143 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 15:36:31.203813 6143 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 15:36:31.203857 6143 factory.go:656] Stopping watch factory\\\\nI1204 15:36:31.203903 6143 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 15:36:31.203911 6143 handler.go:208] Removed *v1.Node event handler 2\\\\nI1204 15:36:31.203917 6143 handler.go:208] Removed *v1.Node event handler 7\\\\nI1204 15:36:31.203923 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 15:36:31.203929 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 15:36:31.203936 6143 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 15:36:31.203941 6143 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 15:36:31.204090 6143 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-over
rides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.526829 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.540082 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.560803 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.582655 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.585117 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.585149 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.585158 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 
15:36:31.585173 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.585182 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:31Z","lastTransitionTime":"2025-12-04T15:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.599498 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.612585 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.629595 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.647631 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:31Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.688339 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.688386 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.688398 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 
15:36:31.688416 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.688427 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:31Z","lastTransitionTime":"2025-12-04T15:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.791785 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.791852 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.791885 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.791907 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.791921 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:31Z","lastTransitionTime":"2025-12-04T15:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.899318 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.899567 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.899664 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.899750 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:31 crc kubenswrapper[4878]: I1204 15:36:31.899810 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:31Z","lastTransitionTime":"2025-12-04T15:36:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.002490 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.002527 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.002536 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.002552 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.002563 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.105549 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.105604 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.105617 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.105639 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.105653 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.179008 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:32 crc kubenswrapper[4878]: E1204 15:36:32.179168 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.208114 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.208356 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.208446 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.208529 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.208593 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.311457 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.311751 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.311842 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.311942 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.312026 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.415060 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.415112 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.415123 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.415144 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.415156 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.425368 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/0.log" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.427518 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.428034 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.439384 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece49
7000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.456224 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:31Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI1204 15:36:31.203754 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 15:36:31.203769 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 15:36:31.203794 6143 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 
15:36:31.203808 6143 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 15:36:31.203813 6143 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 15:36:31.203857 6143 factory.go:656] Stopping watch factory\\\\nI1204 15:36:31.203903 6143 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 15:36:31.203911 6143 handler.go:208] Removed *v1.Node event handler 2\\\\nI1204 15:36:31.203917 6143 handler.go:208] Removed *v1.Node event handler 7\\\\nI1204 15:36:31.203923 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 15:36:31.203929 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 15:36:31.203936 6143 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 15:36:31.203941 6143 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 15:36:31.204090 6143 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.468820 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha2
56:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.480995 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.497107 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.512549 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.517613 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.517673 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.517691 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 
15:36:32.517719 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.517739 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.529167 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.542921 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.556897 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.570472 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.586552 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.599100 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: 
I1204 15:36:32.610114 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.620631 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.620669 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.620682 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.620701 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.620714 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.621840 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.723223 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.723265 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.723278 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.723300 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.723315 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.826695 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.826741 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.826752 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.826770 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.826781 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.902612 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp"] Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.903114 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.905259 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.906997 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.920567 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\
\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771
aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.929633 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.929677 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.929688 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.929705 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.929718 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:32Z","lastTransitionTime":"2025-12-04T15:36:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.940686 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.955737 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:32 crc kubenswrapper[4878]: I1204 15:36:32.970602 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:32Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.027002 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxg2r\" (UniqueName: \"kubernetes.io/projected/63cca643-a7db-4c46-a8eb-350b469d17f5-kube-api-access-wxg2r\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.027124 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/63cca643-a7db-4c46-a8eb-350b469d17f5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 
15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.027160 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/63cca643-a7db-4c46-a8eb-350b469d17f5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.027195 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/63cca643-a7db-4c46-a8eb-350b469d17f5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.032797 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.032901 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.032918 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.032941 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.032965 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.128742 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxg2r\" (UniqueName: \"kubernetes.io/projected/63cca643-a7db-4c46-a8eb-350b469d17f5-kube-api-access-wxg2r\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.129172 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/63cca643-a7db-4c46-a8eb-350b469d17f5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.129379 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/63cca643-a7db-4c46-a8eb-350b469d17f5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.129525 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/63cca643-a7db-4c46-a8eb-350b469d17f5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.130342 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/63cca643-a7db-4c46-a8eb-350b469d17f5-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.130680 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/63cca643-a7db-4c46-a8eb-350b469d17f5-env-overrides\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.137629 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.138193 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.138258 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.138279 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.138291 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.141509 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/63cca643-a7db-4c46-a8eb-350b469d17f5-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.149601 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxg2r\" (UniqueName: \"kubernetes.io/projected/63cca643-a7db-4c46-a8eb-350b469d17f5-kube-api-access-wxg2r\") pod \"ovnkube-control-plane-749d76644c-prhdp\" (UID: \"63cca643-a7db-4c46-a8eb-350b469d17f5\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.179357 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.179402 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:33 crc kubenswrapper[4878]: E1204 15:36:33.179551 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:33 crc kubenswrapper[4878]: E1204 15:36:33.179911 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.201201 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.215365 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.215348 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: W1204 15:36:33.229028 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63cca643_a7db_4c46_a8eb_350b469d17f5.slice/crio-824124a51881b1cfe612a3fce7133ebc91b1b31837c1d0a4cc425ee25cd21e88 WatchSource:0}: Error finding container 824124a51881b1cfe612a3fce7133ebc91b1b31837c1d0a4cc425ee25cd21e88: Status 404 returned error can't find the container with id 824124a51881b1cfe612a3fce7133ebc91b1b31837c1d0a4cc425ee25cd21e88 Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.233136 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 
15:36:33.240945 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.240990 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.241003 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.241025 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.241039 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.251290 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.266052 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.278122 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/service
ca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.290826 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.300652 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.318888 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:31Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI1204 15:36:31.203754 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 15:36:31.203769 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 15:36:31.203794 6143 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 
15:36:31.203808 6143 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 15:36:31.203813 6143 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 15:36:31.203857 6143 factory.go:656] Stopping watch factory\\\\nI1204 15:36:31.203903 6143 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 15:36:31.203911 6143 handler.go:208] Removed *v1.Node event handler 2\\\\nI1204 15:36:31.203917 6143 handler.go:208] Removed *v1.Node event handler 7\\\\nI1204 15:36:31.203923 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 15:36:31.203929 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 15:36:31.203936 6143 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 15:36:31.203941 6143 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 15:36:31.204090 6143 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.331252 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha2
56:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.342654 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:33Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.343668 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.343691 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.343703 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.343718 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.343728 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.431655 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" event={"ID":"63cca643-a7db-4c46-a8eb-350b469d17f5","Type":"ContainerStarted","Data":"824124a51881b1cfe612a3fce7133ebc91b1b31837c1d0a4cc425ee25cd21e88"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.445471 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.445501 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.445510 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.445525 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.445537 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.548356 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.548411 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.548422 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.548499 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.548513 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.652277 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.652328 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.652337 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.652362 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.652373 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.756131 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.756181 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.756194 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.756215 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.756231 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.859683 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.859732 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.859743 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.859765 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.859778 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.962848 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.962924 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.962935 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.962954 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:33 crc kubenswrapper[4878]: I1204 15:36:33.962967 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:33Z","lastTransitionTime":"2025-12-04T15:36:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.029190 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-k9k9q"] Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.029739 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:34 crc kubenswrapper[4878]: E1204 15:36:34.029818 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.046631 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.061693 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"}
,{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.065705 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.065770 4878 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.065784 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.065806 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.065819 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.074389 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.088673 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.100861 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc 
kubenswrapper[4878]: I1204 15:36:34.113428 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.126258 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.137827 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: 
I1204 15:36:34.140412 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvbhl\" (UniqueName: \"kubernetes.io/projected/ab155c5e-9187-4276-98c7-20c0d7e35f4b-kube-api-access-jvbhl\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.140456 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.147037 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec
7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.162152 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.168147 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.168240 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.168256 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.168277 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.168290 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.175821 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.178564 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:34 crc kubenswrapper[4878]: E1204 15:36:34.178712 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.189159 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.212542 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:31Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI1204 15:36:31.203754 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 15:36:31.203769 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 15:36:31.203794 6143 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 
15:36:31.203808 6143 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 15:36:31.203813 6143 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 15:36:31.203857 6143 factory.go:656] Stopping watch factory\\\\nI1204 15:36:31.203903 6143 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 15:36:31.203911 6143 handler.go:208] Removed *v1.Node event handler 2\\\\nI1204 15:36:31.203917 6143 handler.go:208] Removed *v1.Node event handler 7\\\\nI1204 15:36:31.203923 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 15:36:31.203929 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 15:36:31.203936 6143 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 15:36:31.203941 6143 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 15:36:31.204090 6143 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{
\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.236993 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b42312
5916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.241469 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvbhl\" (UniqueName: \"kubernetes.io/projected/ab155c5e-9187-4276-98c7-20c0d7e35f4b-kube-api-access-jvbhl\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.241554 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:34 crc kubenswrapper[4878]: E1204 15:36:34.241807 4878 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:34 crc kubenswrapper[4878]: E1204 15:36:34.241912 4878 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs podName:ab155c5e-9187-4276-98c7-20c0d7e35f4b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:34.741864856 +0000 UTC m=+38.704401832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs") pod "network-metrics-daemon-k9k9q" (UID: "ab155c5e-9187-4276-98c7-20c0d7e35f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.256451 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.260416 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvbhl\" (UniqueName: \"kubernetes.io/projected/ab155c5e-9187-4276-98c7-20c0d7e35f4b-kube-api-access-jvbhl\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.269867 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.271731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.271768 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.271780 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.271799 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.271813 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.375094 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.375157 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.375177 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.375202 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.375223 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.439021 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/1.log" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.439747 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/0.log" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.442841 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1" exitCode=1 Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.442900 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.442964 4878 scope.go:117] "RemoveContainer" containerID="a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.444456 4878 scope.go:117] "RemoveContainer" containerID="d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1" Dec 04 15:36:34 crc kubenswrapper[4878]: E1204 15:36:34.444806 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.460212 4878 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.477370 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.478400 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.478457 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.478471 4878 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.478491 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.478505 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.493714 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.506413 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.517772 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc 
kubenswrapper[4878]: I1204 15:36:34.534491 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.556422 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.570646 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: 
I1204 15:36:34.581415 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.581452 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.581461 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.581478 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.581487 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.590213 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.604888 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.617413 4878 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.627410 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.646953 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:31Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI1204 15:36:31.203754 6143 handler.go:190] Sending 
*v1.EgressIP event handler 8 for removal\\\\nI1204 15:36:31.203769 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 15:36:31.203794 6143 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 15:36:31.203808 6143 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 15:36:31.203813 6143 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 15:36:31.203857 6143 factory.go:656] Stopping watch factory\\\\nI1204 15:36:31.203903 6143 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 15:36:31.203911 6143 handler.go:208] Removed *v1.Node event handler 2\\\\nI1204 15:36:31.203917 6143 handler.go:208] Removed *v1.Node event handler 7\\\\nI1204 15:36:31.203923 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 15:36:31.203929 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 15:36:31.203936 6143 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 15:36:31.203941 6143 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 15:36:31.204090 6143 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:33Z\\\",\\\"message\\\":\\\"32.200337 6313 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1204 15:36:32.200390 6313 ovnkube.go:599] Stopped ovnkube\\\\nI1204 15:36:32.200418 6313 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 15:36:32.200431 6313 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 15:36:32.200452 6313 obj_retry.go:409] Going to retry *v1.Pod resource setup for 9 objects: [openshift-image-registry/node-ca-6rrvz openshift-dns/node-resolver-5bgh4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-qzptn openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-xrwqw openshift-multus/multus-9p8p7]\\\\nF1204 15:36:32.200474 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopp\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c93
05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.661014 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.675539 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.684615 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.684652 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.684662 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.684680 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.684694 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.690330 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:34Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.746989 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:34 crc kubenswrapper[4878]: E1204 15:36:34.747150 4878 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:34 crc kubenswrapper[4878]: E1204 15:36:34.747220 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs podName:ab155c5e-9187-4276-98c7-20c0d7e35f4b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:35.747197312 +0000 UTC m=+39.709734268 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs") pod "network-metrics-daemon-k9k9q" (UID: "ab155c5e-9187-4276-98c7-20c0d7e35f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.787719 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.787777 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.787792 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.787812 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.787825 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.890474 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.890512 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.890522 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.890540 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.890550 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.993031 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.993081 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.993092 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.993111 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:34 crc kubenswrapper[4878]: I1204 15:36:34.993122 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:34Z","lastTransitionTime":"2025-12-04T15:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.095989 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.096040 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.096050 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.096067 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.096079 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:35Z","lastTransitionTime":"2025-12-04T15:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.178924 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.178991 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:35 crc kubenswrapper[4878]: E1204 15:36:35.179135 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:35 crc kubenswrapper[4878]: E1204 15:36:35.179294 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.198983 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.199269 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.199397 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.199485 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.199557 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:35Z","lastTransitionTime":"2025-12-04T15:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.303186 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.303267 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.303281 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.303304 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.303319 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:35Z","lastTransitionTime":"2025-12-04T15:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.406523 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.406578 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.406597 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.406630 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.406644 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:35Z","lastTransitionTime":"2025-12-04T15:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.449259 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" event={"ID":"63cca643-a7db-4c46-a8eb-350b469d17f5","Type":"ContainerStarted","Data":"23007eeb87d51bfe2fc225b848503f281e413c8daae7069d54db4d902d29c82b"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.449315 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" event={"ID":"63cca643-a7db-4c46-a8eb-350b469d17f5","Type":"ContainerStarted","Data":"7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.451172 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/1.log" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.480794 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:31Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI1204 15:36:31.203754 6143 handler.go:190] Sending 
*v1.EgressIP event handler 8 for removal\\\\nI1204 15:36:31.203769 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 15:36:31.203794 6143 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 15:36:31.203808 6143 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 15:36:31.203813 6143 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 15:36:31.203857 6143 factory.go:656] Stopping watch factory\\\\nI1204 15:36:31.203903 6143 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 15:36:31.203911 6143 handler.go:208] Removed *v1.Node event handler 2\\\\nI1204 15:36:31.203917 6143 handler.go:208] Removed *v1.Node event handler 7\\\\nI1204 15:36:31.203923 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 15:36:31.203929 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 15:36:31.203936 6143 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 15:36:31.203941 6143 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 15:36:31.204090 6143 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:33Z\\\",\\\"message\\\":\\\"32.200337 6313 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1204 15:36:32.200390 6313 ovnkube.go:599] Stopped ovnkube\\\\nI1204 15:36:32.200418 6313 metrics.go:553] Stopping metrics server at address 
\\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 15:36:32.200431 6313 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 15:36:32.200452 6313 obj_retry.go:409] Going to retry *v1.Pod resource setup for 9 objects: [openshift-image-registry/node-ca-6rrvz openshift-dns/node-resolver-5bgh4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-qzptn openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-xrwqw openshift-multus/multus-9p8p7]\\\\nF1204 15:36:32.200474 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopp\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\
\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c93
05\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.494911 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.508790 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.508847 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.508865 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.508917 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.508919 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.508946 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:35Z","lastTransitionTime":"2025-12-04T15:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.520980 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\
\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.533178 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.545058 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.564464 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.581372 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube
-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.593087 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc 
kubenswrapper[4878]: I1204 15:36:35.608993 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.611413 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.611451 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.611462 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.611485 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.611498 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:35Z","lastTransitionTime":"2025-12-04T15:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.623409 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.636310 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.649682 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f
7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.662508 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.676848 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.692017 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:35Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.713763 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.713856 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.713903 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.713933 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.713954 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:35Z","lastTransitionTime":"2025-12-04T15:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.757352 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:35 crc kubenswrapper[4878]: E1204 15:36:35.757543 4878 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:35 crc kubenswrapper[4878]: E1204 15:36:35.757658 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs podName:ab155c5e-9187-4276-98c7-20c0d7e35f4b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:37.757629649 +0000 UTC m=+41.720166695 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs") pod "network-metrics-daemon-k9k9q" (UID: "ab155c5e-9187-4276-98c7-20c0d7e35f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.818112 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.818165 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.818175 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.818193 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.818205 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:35Z","lastTransitionTime":"2025-12-04T15:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.922497 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.922549 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.922560 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.922578 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:35 crc kubenswrapper[4878]: I1204 15:36:35.922594 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:35Z","lastTransitionTime":"2025-12-04T15:36:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.025732 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.025783 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.025797 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.025816 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.025830 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.128202 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.128257 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.128266 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.128286 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.128298 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.179206 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.179249 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:36 crc kubenswrapper[4878]: E1204 15:36:36.179434 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:36 crc kubenswrapper[4878]: E1204 15:36:36.179594 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.231101 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.231146 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.231157 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.231176 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.231190 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.333958 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.334027 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.334037 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.334053 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.334064 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.437731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.438036 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.438126 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.438223 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.438321 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.541297 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.541338 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.541346 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.541364 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.541376 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.644436 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.644488 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.644501 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.644521 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.644535 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.747245 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.747284 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.747293 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.747311 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.747324 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.850648 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.850703 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.850716 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.850738 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.850758 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.953364 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.953408 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.953466 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.953485 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:36 crc kubenswrapper[4878]: I1204 15:36:36.953496 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:36Z","lastTransitionTime":"2025-12-04T15:36:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.056909 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.056967 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.056982 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.057000 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.057011 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.159136 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.159179 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.159192 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.159207 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.159218 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.178590 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.178676 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:37 crc kubenswrapper[4878]: E1204 15:36:37.178730 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:37 crc kubenswrapper[4878]: E1204 15:36:37.178840 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.193333 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.204839 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.216053 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.227720 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.239905 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc 
kubenswrapper[4878]: I1204 15:36:37.253486 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.260650 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.260685 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.260695 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.260713 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.260725 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.270402 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.282578 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745
f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.295143 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.327466 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.358255 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.365550 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.365600 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.365615 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.365637 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.365650 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.376754 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.396057 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:31Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI1204 15:36:31.203754 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 15:36:31.203769 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 15:36:31.203794 6143 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 
15:36:31.203808 6143 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 15:36:31.203813 6143 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 15:36:31.203857 6143 factory.go:656] Stopping watch factory\\\\nI1204 15:36:31.203903 6143 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 15:36:31.203911 6143 handler.go:208] Removed *v1.Node event handler 2\\\\nI1204 15:36:31.203917 6143 handler.go:208] Removed *v1.Node event handler 7\\\\nI1204 15:36:31.203923 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 15:36:31.203929 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 15:36:31.203936 6143 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 15:36:31.203941 6143 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 15:36:31.204090 6143 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:33Z\\\",\\\"message\\\":\\\"32.200337 6313 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1204 15:36:32.200390 6313 ovnkube.go:599] Stopped ovnkube\\\\nI1204 15:36:32.200418 6313 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 15:36:32.200431 6313 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 15:36:32.200452 6313 obj_retry.go:409] Going to retry *v1.Pod resource setup for 9 objects: 
[openshift-image-registry/node-ca-6rrvz openshift-dns/node-resolver-5bgh4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-qzptn openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-xrwqw openshift-multus/multus-9p8p7]\\\\nF1204 15:36:32.200474 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopp\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.410829 4878 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.
168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.424101 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.434905 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:37Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.467469 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.467506 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.467515 4878 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.467530 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.467541 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.569849 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.569908 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.569920 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.569938 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.569954 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.673574 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.673610 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.673618 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.673632 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.673643 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.776195 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.776248 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.776263 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.776284 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.776298 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.777825 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:37 crc kubenswrapper[4878]: E1204 15:36:37.778024 4878 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:37 crc kubenswrapper[4878]: E1204 15:36:37.778098 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs podName:ab155c5e-9187-4276-98c7-20c0d7e35f4b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:41.778074711 +0000 UTC m=+45.740611667 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs") pod "network-metrics-daemon-k9k9q" (UID: "ab155c5e-9187-4276-98c7-20c0d7e35f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.879346 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.879393 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.879403 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.879421 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.879434 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.981629 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.981662 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.981671 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.981687 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:37 crc kubenswrapper[4878]: I1204 15:36:37.981695 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:37Z","lastTransitionTime":"2025-12-04T15:36:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.084476 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.084534 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.084548 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.084570 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.084584 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.179104 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.179196 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q"
Dec 04 15:36:38 crc kubenswrapper[4878]: E1204 15:36:38.179291 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 04 15:36:38 crc kubenswrapper[4878]: E1204 15:36:38.179504 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.188017 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.188080 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.188090 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.188109 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.188121 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.290858 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.290926 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.290938 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.290956 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.290968 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.392859 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.392922 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.392935 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.393064 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.393090 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.495807 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.495887 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.495903 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.495923 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.495933 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.598981 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.599028 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.599042 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.599058 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.599068 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.701359 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.701398 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.701406 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.701422 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.701431 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.759052 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.759095 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.759103 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.759117 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.759127 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: E1204 15:36:38.772143 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:38Z is after 2025-08-24T17:21:41Z"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.775811 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.775854 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.775889 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.775908 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.775919 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 04 15:36:38 crc kubenswrapper[4878]: E1204 15:36:38.797341 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:38Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.801716 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.801754 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.801765 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.801783 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.801794 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:38 crc kubenswrapper[4878]: E1204 15:36:38.815531 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:38Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.820610 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.820660 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.820671 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.820689 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.820701 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:38 crc kubenswrapper[4878]: E1204 15:36:38.833026 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:38Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.836543 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.836587 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.836606 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.836629 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.836644 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:38 crc kubenswrapper[4878]: E1204 15:36:38.848733 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:38Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:38 crc kubenswrapper[4878]: E1204 15:36:38.848883 4878 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.850628 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.850667 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.850676 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.850692 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.850702 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.954039 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.954089 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.954103 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.954122 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:38 crc kubenswrapper[4878]: I1204 15:36:38.954136 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:38Z","lastTransitionTime":"2025-12-04T15:36:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.057228 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.057316 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.057328 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.057346 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.057357 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.160542 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.160596 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.160608 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.160628 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.160641 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.178861 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:39 crc kubenswrapper[4878]: E1204 15:36:39.179120 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.179186 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:39 crc kubenswrapper[4878]: E1204 15:36:39.179336 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.263227 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.263296 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.263322 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.263349 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.263366 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.366832 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.366890 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.366900 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.366918 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.366929 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.468939 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.469199 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.469214 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.469233 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.469246 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.571858 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.571953 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.571990 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.572022 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.572045 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.675116 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.675173 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.675189 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.675213 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.675231 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.778246 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.778320 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.778337 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.778360 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.778379 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.880976 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.881025 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.881035 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.881060 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.881075 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.983535 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.983583 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.983593 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.983614 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:39 crc kubenswrapper[4878]: I1204 15:36:39.983626 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:39Z","lastTransitionTime":"2025-12-04T15:36:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.086836 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.086905 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.086919 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.086938 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.086952 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:40Z","lastTransitionTime":"2025-12-04T15:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.179136 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.179136 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:40 crc kubenswrapper[4878]: E1204 15:36:40.179321 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:40 crc kubenswrapper[4878]: E1204 15:36:40.179358 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.189802 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.189901 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.189918 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.189942 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.189956 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:40Z","lastTransitionTime":"2025-12-04T15:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.293029 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.293107 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.293122 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.293145 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.293159 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:40Z","lastTransitionTime":"2025-12-04T15:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.396593 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.396658 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.396673 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.396694 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.396711 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:40Z","lastTransitionTime":"2025-12-04T15:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.499291 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.499336 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.499367 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.499387 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.499400 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:40Z","lastTransitionTime":"2025-12-04T15:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.602212 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.602269 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.602279 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.602298 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.602310 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:40Z","lastTransitionTime":"2025-12-04T15:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.705428 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.705522 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.705533 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.705549 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.705559 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:40Z","lastTransitionTime":"2025-12-04T15:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.809007 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.809060 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.809070 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.809090 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.809101 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:40Z","lastTransitionTime":"2025-12-04T15:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.911811 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.911906 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.911923 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.911941 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:40 crc kubenswrapper[4878]: I1204 15:36:40.911953 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:40Z","lastTransitionTime":"2025-12-04T15:36:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.014468 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.014504 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.014513 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.014528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.014539 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.116650 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.116709 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.116725 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.116744 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.116755 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.179501 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.179544 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:41 crc kubenswrapper[4878]: E1204 15:36:41.179659 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:41 crc kubenswrapper[4878]: E1204 15:36:41.179810 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.219939 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.220012 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.220029 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.220056 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.220076 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.323236 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.323291 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.323302 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.323325 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.323338 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.426636 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.426691 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.426906 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.426923 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.426934 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.530143 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.530211 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.530221 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.530237 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.530249 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.632865 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.632957 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.632973 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.632993 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.633007 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.735268 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.735307 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.735316 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.735331 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.735340 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.822081 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:41 crc kubenswrapper[4878]: E1204 15:36:41.822245 4878 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:41 crc kubenswrapper[4878]: E1204 15:36:41.822314 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs podName:ab155c5e-9187-4276-98c7-20c0d7e35f4b nodeName:}" failed. No retries permitted until 2025-12-04 15:36:49.82229293 +0000 UTC m=+53.784829886 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs") pod "network-metrics-daemon-k9k9q" (UID: "ab155c5e-9187-4276-98c7-20c0d7e35f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.837626 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.837687 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.837696 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.837711 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.837723 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.940277 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.940330 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.940346 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.940369 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:41 crc kubenswrapper[4878]: I1204 15:36:41.940384 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:41Z","lastTransitionTime":"2025-12-04T15:36:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.043243 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.043303 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.043317 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.043345 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.043360 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.147035 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.147092 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.147110 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.147134 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.147151 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.179469 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.179544 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:42 crc kubenswrapper[4878]: E1204 15:36:42.179636 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:42 crc kubenswrapper[4878]: E1204 15:36:42.179795 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.249675 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.249718 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.249731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.249752 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.249764 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.351616 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.351662 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.351670 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.351686 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.351695 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.454100 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.454168 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.454178 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.454196 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.454208 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.557038 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.557088 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.557100 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.557116 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.557125 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.659839 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.659914 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.659925 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.659942 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.659953 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.765282 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.765338 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.765352 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.765376 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.765390 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.867484 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.867556 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.867570 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.867590 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.867604 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.970134 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.970177 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.970187 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.970206 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:42 crc kubenswrapper[4878]: I1204 15:36:42.970219 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:42Z","lastTransitionTime":"2025-12-04T15:36:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.073567 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.073642 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.073655 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.073678 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.073693 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.176823 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.176893 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.176903 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.176923 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.176935 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.179231 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.179306 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:43 crc kubenswrapper[4878]: E1204 15:36:43.179392 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:43 crc kubenswrapper[4878]: E1204 15:36:43.179477 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.279211 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.279260 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.279270 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.279287 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.279297 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.381916 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.381953 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.381962 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.381976 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.381985 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.484134 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.484180 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.484197 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.484215 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.484227 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.586766 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.586826 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.586839 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.586862 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.586898 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.690098 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.690174 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.690190 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.690213 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.690230 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.793157 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.793211 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.793224 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.793244 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.793257 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.896512 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.896579 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.896589 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.896611 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.896623 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.998936 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.999004 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.999018 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.999038 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:43 crc kubenswrapper[4878]: I1204 15:36:43.999053 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:43Z","lastTransitionTime":"2025-12-04T15:36:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.102318 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.102353 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.102366 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.102385 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.102395 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:44Z","lastTransitionTime":"2025-12-04T15:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.178608 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.178641 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:44 crc kubenswrapper[4878]: E1204 15:36:44.178755 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:44 crc kubenswrapper[4878]: E1204 15:36:44.178856 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.205305 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.205373 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.205383 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.205403 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.205415 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:44Z","lastTransitionTime":"2025-12-04T15:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.308287 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.308353 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.308365 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.308416 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.308427 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:44Z","lastTransitionTime":"2025-12-04T15:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.411140 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.411192 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.411205 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.411223 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.411237 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:44Z","lastTransitionTime":"2025-12-04T15:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.513604 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.513666 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.513684 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.513713 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.513724 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:44Z","lastTransitionTime":"2025-12-04T15:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.616305 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.616355 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.616369 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.616386 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.616399 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:44Z","lastTransitionTime":"2025-12-04T15:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.719003 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.719090 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.719108 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.719134 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.719155 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:44Z","lastTransitionTime":"2025-12-04T15:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.822961 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.823018 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.823032 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.823054 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.823068 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:44Z","lastTransitionTime":"2025-12-04T15:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.925025 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.925094 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.925103 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.925135 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:44 crc kubenswrapper[4878]: I1204 15:36:44.925154 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:44Z","lastTransitionTime":"2025-12-04T15:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.027504 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.027551 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.027560 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.027577 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.027588 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.130920 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.130982 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.130993 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.131019 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.131031 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.179349 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.179375 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:45 crc kubenswrapper[4878]: E1204 15:36:45.179514 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:45 crc kubenswrapper[4878]: E1204 15:36:45.179633 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.234913 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.234971 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.234985 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.235010 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.235023 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.338474 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.338535 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.338546 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.338565 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.338578 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.441053 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.441104 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.441114 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.441138 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.441151 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.544527 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.544586 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.544597 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.544615 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.544627 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.648105 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.648150 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.648159 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.648178 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.648189 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.751324 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.751387 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.751405 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.751430 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.751447 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.854114 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.854153 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.854163 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.854179 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.854190 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.956917 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.956959 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.956970 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.956988 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:45 crc kubenswrapper[4878]: I1204 15:36:45.957007 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:45Z","lastTransitionTime":"2025-12-04T15:36:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.059459 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.059493 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.059500 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.059517 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.059526 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.162171 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.162216 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.162227 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.162246 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.162259 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.179181 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.179195 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:46 crc kubenswrapper[4878]: E1204 15:36:46.179384 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:46 crc kubenswrapper[4878]: E1204 15:36:46.179577 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.264678 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.265528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.265599 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.265632 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.265664 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.368903 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.368967 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.368983 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.369008 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.369024 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.471350 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.471404 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.471421 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.471442 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.471456 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.574697 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.574745 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.574754 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.574774 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.574786 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.678623 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.678673 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.678687 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.678707 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.678717 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.781060 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.781122 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.781143 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.781166 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.781181 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.884489 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.884526 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.884535 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.884550 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.884561 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.947803 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.960558 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\
\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:46Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.963308 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.983674 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:31Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI1204 15:36:31.203754 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 15:36:31.203769 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 15:36:31.203794 6143 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 
15:36:31.203808 6143 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 15:36:31.203813 6143 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 15:36:31.203857 6143 factory.go:656] Stopping watch factory\\\\nI1204 15:36:31.203903 6143 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 15:36:31.203911 6143 handler.go:208] Removed *v1.Node event handler 2\\\\nI1204 15:36:31.203917 6143 handler.go:208] Removed *v1.Node event handler 7\\\\nI1204 15:36:31.203923 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 15:36:31.203929 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 15:36:31.203936 6143 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 15:36:31.203941 6143 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 15:36:31.204090 6143 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:33Z\\\",\\\"message\\\":\\\"32.200337 6313 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1204 15:36:32.200390 6313 ovnkube.go:599] Stopped ovnkube\\\\nI1204 15:36:32.200418 6313 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 15:36:32.200431 6313 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 15:36:32.200452 6313 obj_retry.go:409] Going to retry *v1.Pod resource setup for 9 objects: 
[openshift-image-registry/node-ca-6rrvz openshift-dns/node-resolver-5bgh4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-qzptn openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-xrwqw openshift-multus/multus-9p8p7]\\\\nF1204 15:36:32.200474 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopp\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:46Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.991206 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.991266 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.991282 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.991307 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:46 crc kubenswrapper[4878]: I1204 15:36:46.991329 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:46Z","lastTransitionTime":"2025-12-04T15:36:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.002710 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b
6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:46Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.018503 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.033410 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.046332 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.058578 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.072064 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.085185 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.085219 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.085348 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.085406 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:37:19.08537117 +0000 UTC m=+83.047908126 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.085455 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.085479 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.085549 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:37:19.085527304 +0000 UTC m=+83.048064340 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.085602 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.085653 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:37:19.085646587 +0000 UTC m=+83.048183543 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.093532 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.093574 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.093586 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.093609 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.093619 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:47Z","lastTransitionTime":"2025-12-04T15:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.097254 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc 
kubenswrapper[4878]: I1204 15:36:47.110535 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.123480 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.136740 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.148294 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: 
I1204 15:36:47.158082 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.169193 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.179560 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.179685 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.179770 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.179898 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.186808 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.186917 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.187010 4878 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.187035 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.187049 4878 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.187072 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.187093 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.187107 4878 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.187108 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-12-04 15:37:19.187087043 +0000 UTC m=+83.149623999 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:47 crc kubenswrapper[4878]: E1204 15:36:47.187176 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 15:37:19.187157155 +0000 UTC m=+83.149694201 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.192416 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.196206 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.196245 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.196255 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.196272 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.196283 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:47Z","lastTransitionTime":"2025-12-04T15:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.204354 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.215529 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.233446 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4332d27275e77a2834c57410b04413d292ca5cd4873aaa6c81ee6c6db3f19f8\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:31Z\\\",\\\"message\\\":\\\"vent handler 7 for removal\\\\nI1204 15:36:31.203754 6143 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1204 15:36:31.203769 6143 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1204 15:36:31.203794 6143 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1204 
15:36:31.203808 6143 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1204 15:36:31.203813 6143 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1204 15:36:31.203857 6143 factory.go:656] Stopping watch factory\\\\nI1204 15:36:31.203903 6143 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1204 15:36:31.203911 6143 handler.go:208] Removed *v1.Node event handler 2\\\\nI1204 15:36:31.203917 6143 handler.go:208] Removed *v1.Node event handler 7\\\\nI1204 15:36:31.203923 6143 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1204 15:36:31.203929 6143 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1204 15:36:31.203936 6143 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1204 15:36:31.203941 6143 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1204 15:36:31.204090 6143 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:33Z\\\",\\\"message\\\":\\\"32.200337 6313 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1204 15:36:32.200390 6313 ovnkube.go:599] Stopped ovnkube\\\\nI1204 15:36:32.200418 6313 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 15:36:32.200431 6313 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 15:36:32.200452 6313 obj_retry.go:409] Going to retry *v1.Pod resource setup for 9 objects: 
[openshift-image-registry/node-ca-6rrvz openshift-dns/node-resolver-5bgh4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-qzptn openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-xrwqw openshift-multus/multus-9p8p7]\\\\nF1204 15:36:32.200474 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopp\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.244193 4878 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.256184 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.277098 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.290639 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.300238 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.300303 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.300320 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.300341 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.300355 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:47Z","lastTransitionTime":"2025-12-04T15:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.308193 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc 
kubenswrapper[4878]: I1204 15:36:47.322244 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.333382 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.346228 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.362484 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.376489 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.389527 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.402652 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.402711 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.402730 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.402759 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.402777 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:47Z","lastTransitionTime":"2025-12-04T15:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.409609 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\
\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4
a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.429051 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:47Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.505489 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.505582 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.505607 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.505633 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.505653 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:47Z","lastTransitionTime":"2025-12-04T15:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.608583 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.608663 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.608678 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.608703 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.608719 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:47Z","lastTransitionTime":"2025-12-04T15:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.712161 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.712223 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.712237 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.712258 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.712270 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:47Z","lastTransitionTime":"2025-12-04T15:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.814934 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.814981 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.814991 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.815009 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.815022 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:47Z","lastTransitionTime":"2025-12-04T15:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.918828 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.918916 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.918930 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.918951 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:47 crc kubenswrapper[4878]: I1204 15:36:47.918969 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:47Z","lastTransitionTime":"2025-12-04T15:36:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.022129 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.022203 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.022224 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.022249 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.022267 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.124555 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.124609 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.124621 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.124639 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.124653 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.179577 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.179691 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:48 crc kubenswrapper[4878]: E1204 15:36:48.179803 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:48 crc kubenswrapper[4878]: E1204 15:36:48.179994 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.180750 4878 scope.go:117] "RemoveContainer" containerID="d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.203980 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a
1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.219744 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.226755 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.226848 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.226861 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.226897 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.226916 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.238942 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.257450 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.275893 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc 
kubenswrapper[4878]: I1204 15:36:48.291212 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.307972 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.323808 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: 
I1204 15:36:48.329361 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.329401 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.329412 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.329433 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.329446 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.335738 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.348039 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.360763 4878 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.371700 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.394306 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:33Z\\\",\\\"message\\\":\\\"32.200337 6313 admin_network_policy_controller.go:133] Setting up event handlers for 
Admin Network Policy\\\\nI1204 15:36:32.200390 6313 ovnkube.go:599] Stopped ovnkube\\\\nI1204 15:36:32.200418 6313 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 15:36:32.200431 6313 obj_retry.go:434] periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 15:36:32.200452 6313 obj_retry.go:409] Going to retry *v1.Pod resource setup for 9 objects: [openshift-image-registry/node-ca-6rrvz openshift-dns/node-resolver-5bgh4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-qzptn openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-xrwqw openshift-multus/multus-9p8p7]\\\\nF1204 15:36:32.200474 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopp\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da
2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.410095 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.424382 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.432246 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.432298 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.432309 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.432327 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.432343 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.439375 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.451338 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.534538 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.534623 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.534651 4878 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.534684 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.534707 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.638195 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.638266 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.638290 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.638323 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.638348 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.741466 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.741544 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.741644 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.741731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.741758 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.845507 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.845583 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.845604 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.845633 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.845654 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.948996 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.949075 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.949095 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.949126 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.949149 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.969457 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.969509 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.969521 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.969540 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.969555 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:48 crc kubenswrapper[4878]: E1204 15:36:48.983734 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:48Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.990510 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.990570 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.990584 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.990605 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:48 crc kubenswrapper[4878]: I1204 15:36:48.990620 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:48Z","lastTransitionTime":"2025-12-04T15:36:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: E1204 15:36:49.043743 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.049043 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.049099 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.049109 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.049126 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.049137 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: E1204 15:36:49.064505 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: E1204 15:36:49.064703 4878 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.066834 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.066890 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.066907 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.066930 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.066946 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.170416 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.170478 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.170491 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.170514 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.170528 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.178906 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.178929 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:49 crc kubenswrapper[4878]: E1204 15:36:49.179141 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:49 crc kubenswrapper[4878]: E1204 15:36:49.179304 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.273480 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.273835 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.273847 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.273866 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.273897 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.376572 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.376616 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.376627 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.376644 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.376655 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.479082 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.479139 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.479156 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.479176 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.479188 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.510998 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/1.log" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.514683 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.515245 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.535684 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.548063 4878 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.566224 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.581414 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.581457 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.581467 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.581486 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.581498 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.594898 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:33Z\\\",\\\"message\\\":\\\"32.200337 6313 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1204 15:36:32.200390 6313 ovnkube.go:599] Stopped ovnkube\\\\nI1204 15:36:32.200418 6313 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 15:36:32.200431 6313 obj_retry.go:434] 
periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 15:36:32.200452 6313 obj_retry.go:409] Going to retry *v1.Pod resource setup for 9 objects: [openshift-image-registry/node-ca-6rrvz openshift-dns/node-resolver-5bgh4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-qzptn openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-xrwqw openshift-multus/multus-9p8p7]\\\\nF1204 15:36:32.200474 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has 
stopp\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.609529 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.627727 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.645885 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.663254 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.677550 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.684707 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.684754 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.684764 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.684792 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.684804 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.694086 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.709446 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.723900 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.737907 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.750212 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.765529 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.787610 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.787672 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.787685 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.787708 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.787722 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.793578 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\
\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4
a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.816238 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.891108 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.891141 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.891152 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.891169 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.891184 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.916570 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:49 crc kubenswrapper[4878]: E1204 15:36:49.916771 4878 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:49 crc kubenswrapper[4878]: E1204 15:36:49.916905 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs podName:ab155c5e-9187-4276-98c7-20c0d7e35f4b nodeName:}" failed. No retries permitted until 2025-12-04 15:37:05.916863436 +0000 UTC m=+69.879400392 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs") pod "network-metrics-daemon-k9k9q" (UID: "ab155c5e-9187-4276-98c7-20c0d7e35f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.994391 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.994453 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.994465 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.994482 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:49 crc kubenswrapper[4878]: I1204 15:36:49.994501 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:49Z","lastTransitionTime":"2025-12-04T15:36:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.097325 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.097368 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.097393 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.097412 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.097422 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:50Z","lastTransitionTime":"2025-12-04T15:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.179466 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.179489 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:50 crc kubenswrapper[4878]: E1204 15:36:50.179627 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:50 crc kubenswrapper[4878]: E1204 15:36:50.179751 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.200134 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.200185 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.200197 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.200218 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.200231 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:50Z","lastTransitionTime":"2025-12-04T15:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.302329 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.302412 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.302437 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.302469 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.302494 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:50Z","lastTransitionTime":"2025-12-04T15:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.405285 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.405343 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.405355 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.405376 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.405389 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:50Z","lastTransitionTime":"2025-12-04T15:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.507967 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.508025 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.508038 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.508062 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.508075 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:50Z","lastTransitionTime":"2025-12-04T15:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.611247 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.611328 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.611350 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.611395 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.611414 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:50Z","lastTransitionTime":"2025-12-04T15:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.714000 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.714069 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.714087 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.714111 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.714131 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:50Z","lastTransitionTime":"2025-12-04T15:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.817073 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.817122 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.817134 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.817156 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.817167 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:50Z","lastTransitionTime":"2025-12-04T15:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.919983 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.920026 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.920040 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.920061 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:50 crc kubenswrapper[4878]: I1204 15:36:50.920074 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:50Z","lastTransitionTime":"2025-12-04T15:36:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.023385 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.023428 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.023441 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.023460 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.023473 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.125699 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.125820 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.125842 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.125892 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.125916 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.179557 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.179633 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:51 crc kubenswrapper[4878]: E1204 15:36:51.179804 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:51 crc kubenswrapper[4878]: E1204 15:36:51.180120 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.229206 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.229244 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.229254 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.229273 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.229284 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.332269 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.332325 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.332338 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.332360 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.332375 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.434598 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.434641 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.434654 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.434671 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.434685 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.524336 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/2.log" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.525455 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/1.log" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.529292 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc" exitCode=1 Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.529356 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.529466 4878 scope.go:117] "RemoveContainer" containerID="d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.530553 4878 scope.go:117] "RemoveContainer" containerID="75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc" Dec 04 15:36:51 crc kubenswrapper[4878]: E1204 15:36:51.530771 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.538205 4878 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.538262 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.538291 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.538321 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.538344 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.557735 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b
6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.574390 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.592465 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.628468 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:33Z\\\",\\\"message\\\":\\\"32.200337 6313 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1204 15:36:32.200390 6313 ovnkube.go:599] Stopped ovnkube\\\\nI1204 15:36:32.200418 6313 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 15:36:32.200431 6313 obj_retry.go:434] 
periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 15:36:32.200452 6313 obj_retry.go:409] Going to retry *v1.Pod resource setup for 9 objects: [openshift-image-registry/node-ca-6rrvz openshift-dns/node-resolver-5bgh4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-qzptn openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-xrwqw openshift-multus/multus-9p8p7]\\\\nF1204 15:36:32.200474 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopp\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:50Z\\\",\\\"message\\\":\\\"04 15:36:50.017485 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qzptn\\\\nF1204 15:36:50.017506 6519 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z]\\\\nI1204 15:36:50.017499 6519 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1204 15:36:50.017511 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1204 15:36:50.017518 6519 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\
\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.640598 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.640670 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.640692 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.640724 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.640745 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.646562 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9
e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.668304 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.683268 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.698161 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.713597 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.728403 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.743056 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.743539 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.743583 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.743597 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.743676 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.743690 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.759516 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:
36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.771528 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc 
kubenswrapper[4878]: I1204 15:36:51.784262 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.798678 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.814708 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: 
I1204 15:36:51.826976 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:51Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.846235 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.846282 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.846292 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.846310 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.846321 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.948743 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.948786 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.948795 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.948816 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:51 crc kubenswrapper[4878]: I1204 15:36:51.948826 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:51Z","lastTransitionTime":"2025-12-04T15:36:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.051327 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.051398 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.051421 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.051455 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.051479 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.154390 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.154434 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.154446 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.154464 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.154480 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.178948 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.178950 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:52 crc kubenswrapper[4878]: E1204 15:36:52.179152 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:52 crc kubenswrapper[4878]: E1204 15:36:52.179242 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.257924 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.257996 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.258022 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.258057 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.258076 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.361076 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.361123 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.361138 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.361162 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.361176 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.464176 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.464237 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.464247 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.464264 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.464275 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.535010 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/2.log" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.567370 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.567440 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.567453 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.567474 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.567489 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.670738 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.670780 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.670790 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.670807 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.670819 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.773591 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.773630 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.773642 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.773689 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.773704 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.877240 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.877306 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.877319 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.877400 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.877415 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.981006 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.981063 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.981076 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.981098 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:52 crc kubenswrapper[4878]: I1204 15:36:52.981117 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:52Z","lastTransitionTime":"2025-12-04T15:36:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.084825 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.084917 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.084928 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.084948 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.084960 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:53Z","lastTransitionTime":"2025-12-04T15:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.179635 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.179663 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:53 crc kubenswrapper[4878]: E1204 15:36:53.180211 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:53 crc kubenswrapper[4878]: E1204 15:36:53.180324 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.187507 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.187545 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.187555 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.187568 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.187580 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:53Z","lastTransitionTime":"2025-12-04T15:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.290749 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.290833 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.290851 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.290921 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.290958 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:53Z","lastTransitionTime":"2025-12-04T15:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.394007 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.394048 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.394057 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.394071 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.394080 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:53Z","lastTransitionTime":"2025-12-04T15:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.496376 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.496425 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.496434 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.496448 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.496458 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:53Z","lastTransitionTime":"2025-12-04T15:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.599524 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.599580 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.599597 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.599625 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.599657 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:53Z","lastTransitionTime":"2025-12-04T15:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.702393 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.702440 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.702451 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.702473 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.702494 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:53Z","lastTransitionTime":"2025-12-04T15:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.805125 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.805239 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.805257 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.805285 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.805302 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:53Z","lastTransitionTime":"2025-12-04T15:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.908562 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.908646 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.908659 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.908681 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:53 crc kubenswrapper[4878]: I1204 15:36:53.908694 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:53Z","lastTransitionTime":"2025-12-04T15:36:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.011505 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.011576 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.011589 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.011607 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.011619 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.114324 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.114393 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.114416 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.114448 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.114472 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.178818 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.178925 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:54 crc kubenswrapper[4878]: E1204 15:36:54.179050 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:54 crc kubenswrapper[4878]: E1204 15:36:54.179397 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.217664 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.217723 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.217776 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.217798 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.217816 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.321190 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.321245 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.321260 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.321282 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.321297 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.423929 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.424037 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.424089 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.424119 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.424141 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.526752 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.526822 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.526834 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.526850 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.526861 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.630751 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.630804 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.630815 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.630836 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.630853 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.733638 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.733731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.733745 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.733796 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.733810 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.837209 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.837280 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.837291 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.837310 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.837323 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.939748 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.939799 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.939809 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.939827 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:54 crc kubenswrapper[4878]: I1204 15:36:54.939838 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:54Z","lastTransitionTime":"2025-12-04T15:36:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.043036 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.043131 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.043149 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.043169 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.043183 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.146258 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.146328 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.146338 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.146358 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.146370 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.179346 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.179425 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:55 crc kubenswrapper[4878]: E1204 15:36:55.180322 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:55 crc kubenswrapper[4878]: E1204 15:36:55.180420 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.250207 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.250270 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.250282 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.250303 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.250317 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.354278 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.354344 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.354358 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.354379 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.354397 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.457940 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.458007 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.458024 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.458086 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.458104 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.560900 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.560957 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.560971 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.560991 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.561002 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.663149 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.663200 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.663211 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.663228 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.663238 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.765907 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.765943 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.765952 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.765968 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.765981 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.867820 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.867927 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.867954 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.867985 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.868008 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.971671 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.971731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.971741 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.971760 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:55 crc kubenswrapper[4878]: I1204 15:36:55.971773 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:55Z","lastTransitionTime":"2025-12-04T15:36:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.074254 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.074325 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.074339 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.074363 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.074379 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:56Z","lastTransitionTime":"2025-12-04T15:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.177636 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.177702 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.177720 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.177746 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.177765 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:56Z","lastTransitionTime":"2025-12-04T15:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.178803 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.178810 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:56 crc kubenswrapper[4878]: E1204 15:36:56.178979 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:56 crc kubenswrapper[4878]: E1204 15:36:56.179075 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.281185 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.281234 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.281243 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.281268 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.281279 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:56Z","lastTransitionTime":"2025-12-04T15:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.384470 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.384541 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.384555 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.384576 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.384590 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:56Z","lastTransitionTime":"2025-12-04T15:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.487005 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.487053 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.487066 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.487085 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.487097 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:56Z","lastTransitionTime":"2025-12-04T15:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.589948 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.589985 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.589995 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.590013 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.590024 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:56Z","lastTransitionTime":"2025-12-04T15:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.692997 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.693049 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.693063 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.693082 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.693097 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:56Z","lastTransitionTime":"2025-12-04T15:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.796013 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.796057 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.796067 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.796084 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.796095 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:56Z","lastTransitionTime":"2025-12-04T15:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.900059 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.900121 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.900136 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.900158 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:56 crc kubenswrapper[4878]: I1204 15:36:56.900175 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:56Z","lastTransitionTime":"2025-12-04T15:36:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.003731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.003784 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.003795 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.003816 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.003830 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.106431 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.106480 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.106490 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.106506 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.106523 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.178782 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:57 crc kubenswrapper[4878]: E1204 15:36:57.178996 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.179137 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:57 crc kubenswrapper[4878]: E1204 15:36:57.179294 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.195240 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.209917 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.209971 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.209988 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.210008 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.210026 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.211919 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.230453 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.243779 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc 
kubenswrapper[4878]: I1204 15:36:57.260429 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.277048 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.292374 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.306117 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: 
I1204 15:36:57.312124 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.312174 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.312187 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.312206 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.312219 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.319091 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.331990 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.343250 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.361121 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d23be7d2b200110ee450cedc850c96a9836d8ff77fa250f07ca65c6eb92135e1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:33Z\\\",\\\"message\\\":\\\"32.200337 6313 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1204 15:36:32.200390 6313 ovnkube.go:599] Stopped ovnkube\\\\nI1204 15:36:32.200418 6313 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1204 15:36:32.200431 6313 obj_retry.go:434] 
periodicallyRetryResources: Retry channel got triggered: retrying failed objects of type *v1.Pod\\\\nI1204 15:36:32.200452 6313 obj_retry.go:409] Going to retry *v1.Pod resource setup for 9 objects: [openshift-image-registry/node-ca-6rrvz openshift-dns/node-resolver-5bgh4 openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-network-diagnostics/network-check-source-55646444c4-trplf openshift-network-node-identity/network-node-identity-vrzqb openshift-ovn-kubernetes/ovnkube-node-qzptn openshift-kube-apiserver/kube-apiserver-crc openshift-machine-config-operator/machine-config-daemon-xrwqw openshift-multus/multus-9p8p7]\\\\nF1204 15:36:32.200474 6313 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopp\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:50Z\\\",\\\"message\\\":\\\"04 15:36:50.017485 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qzptn\\\\nF1204 15:36:50.017506 6519 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start 
node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z]\\\\nI1204 15:36:50.017499 6519 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1204 15:36:50.017511 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1204 15:36:50.017518 6519 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\
\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.374258 4878 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.387285 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.398671 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.412830 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.414280 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.414344 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.414357 4878 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.414378 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.414391 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.424769 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:57Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.517720 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.517764 4878 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.517776 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.517795 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.517807 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.620263 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.620363 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.620381 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.620752 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.620827 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.724143 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.724190 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.724204 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.724221 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.724234 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.827172 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.827219 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.827229 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.827246 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.827257 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.930456 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.930508 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.930521 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.930540 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:57 crc kubenswrapper[4878]: I1204 15:36:57.930553 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:57Z","lastTransitionTime":"2025-12-04T15:36:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.033647 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.033686 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.033696 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.033712 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.033722 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.136393 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.136446 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.136457 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.136472 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.136482 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.179088 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.179211 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:36:58 crc kubenswrapper[4878]: E1204 15:36:58.179280 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:36:58 crc kubenswrapper[4878]: E1204 15:36:58.179444 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.239780 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.239837 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.239850 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.239900 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.239919 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.342339 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.342388 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.342399 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.342413 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.342424 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.445646 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.445695 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.445708 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.445727 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.445739 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.548654 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.548697 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.548708 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.548724 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.548734 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.652078 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.652165 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.652181 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.652206 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.652224 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.754853 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.754932 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.754944 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.754964 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.754977 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.858438 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.858513 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.858526 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.858551 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.858569 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.961953 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.962004 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.962015 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.962031 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:58 crc kubenswrapper[4878]: I1204 15:36:58.962043 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:58Z","lastTransitionTime":"2025-12-04T15:36:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.066491 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.066571 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.066581 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.066595 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.066606 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.167140 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.167192 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.167204 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.167222 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.167236 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.179207 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.179382 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:36:59 crc kubenswrapper[4878]: E1204 15:36:59.179528 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:36:59 crc kubenswrapper[4878]: E1204 15:36:59.179724 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:36:59 crc kubenswrapper[4878]: E1204 15:36:59.182616 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:59Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.187487 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.187540 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.187550 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.187576 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.187590 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: E1204 15:36:59.201770 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:59Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.206267 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.206344 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.206359 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.206381 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.206452 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: E1204 15:36:59.220279 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:59Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.224342 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.224384 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.224393 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.224409 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.224420 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: E1204 15:36:59.236636 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:59Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.241020 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.241071 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.241084 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.241106 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.241119 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: E1204 15:36:59.253773 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:59Z is after 2025-08-24T17:21:41Z" Dec 04 15:36:59 crc kubenswrapper[4878]: E1204 15:36:59.253968 4878 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.255494 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.255524 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.255536 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.255553 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.255563 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.358617 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.358682 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.358699 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.358724 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.358744 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.461681 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.461736 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.461745 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.461804 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.461850 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.563804 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.563853 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.563864 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.563910 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.563921 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.668015 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.668056 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.668066 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.668087 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.668100 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.770840 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.770903 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.770914 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.770931 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.770942 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.873703 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.873782 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.873798 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.873822 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.873855 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.977555 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.977619 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.977635 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.977657 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:36:59 crc kubenswrapper[4878]: I1204 15:36:59.977673 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:36:59Z","lastTransitionTime":"2025-12-04T15:36:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.080248 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.080299 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.080308 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.080325 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.080336 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:00Z","lastTransitionTime":"2025-12-04T15:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.179169 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.179165 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:00 crc kubenswrapper[4878]: E1204 15:37:00.179344 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:00 crc kubenswrapper[4878]: E1204 15:37:00.179372 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.183311 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.183385 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.183401 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.183425 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.183445 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:00Z","lastTransitionTime":"2025-12-04T15:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.285861 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.285945 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.285954 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.285970 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.285981 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:00Z","lastTransitionTime":"2025-12-04T15:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.389009 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.389052 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.389061 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.389083 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.389101 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:00Z","lastTransitionTime":"2025-12-04T15:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.492061 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.492116 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.492130 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.492153 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.492168 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:00Z","lastTransitionTime":"2025-12-04T15:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.595674 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.595748 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.595764 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.595788 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.595805 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:00Z","lastTransitionTime":"2025-12-04T15:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.699022 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.699102 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.699115 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.699138 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.699153 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:00Z","lastTransitionTime":"2025-12-04T15:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.802006 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.802055 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.802067 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.802085 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.802098 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:00Z","lastTransitionTime":"2025-12-04T15:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.904477 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.904518 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.904528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.904546 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:00 crc kubenswrapper[4878]: I1204 15:37:00.904558 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:00Z","lastTransitionTime":"2025-12-04T15:37:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.007306 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.007370 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.007393 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.007650 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.007681 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.111127 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.111179 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.111188 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.111205 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.111218 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.179167 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.179218 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:01 crc kubenswrapper[4878]: E1204 15:37:01.179336 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:01 crc kubenswrapper[4878]: E1204 15:37:01.179538 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.213119 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.213163 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.213175 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.213192 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.213203 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.316156 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.316202 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.316210 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.316225 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.316235 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.419182 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.419258 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.419272 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.419298 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.419313 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.522147 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.522237 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.522247 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.522265 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.522278 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.625752 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.625820 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.625832 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.625852 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.625865 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.729184 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.729224 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.729233 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.729252 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.729262 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.832128 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.832191 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.832203 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.832223 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.832238 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.935006 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.935060 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.935074 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.935102 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:01 crc kubenswrapper[4878]: I1204 15:37:01.935133 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:01Z","lastTransitionTime":"2025-12-04T15:37:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.038070 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.038118 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.038127 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.038142 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.038152 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.140482 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.140532 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.140543 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.140562 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.140576 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.179302 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:02 crc kubenswrapper[4878]: E1204 15:37:02.179479 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.179309 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:02 crc kubenswrapper[4878]: E1204 15:37:02.179752 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.243669 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.243727 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.243742 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.243768 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.243778 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.347301 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.347380 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.347395 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.347420 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.347447 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.451071 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.451129 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.451143 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.451167 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.451182 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.554150 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.554198 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.554212 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.554233 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.554247 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.656466 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.656513 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.656524 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.656543 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.656554 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.759291 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.759355 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.759365 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.759384 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.759402 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.863010 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.863071 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.863085 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.863103 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.863114 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.965846 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.965930 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.965942 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.965958 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:02 crc kubenswrapper[4878]: I1204 15:37:02.965968 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:02Z","lastTransitionTime":"2025-12-04T15:37:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.069701 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.069757 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.069769 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.069788 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.069801 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.173324 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.173391 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.173404 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.173424 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.173439 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.179717 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.179779 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:03 crc kubenswrapper[4878]: E1204 15:37:03.179908 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:03 crc kubenswrapper[4878]: E1204 15:37:03.180103 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.276315 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.276379 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.276395 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.276422 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.276443 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.379329 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.379372 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.379383 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.379404 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.379414 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.482137 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.482188 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.482197 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.482213 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.482224 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.585212 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.585274 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.585284 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.585302 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.585315 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.688014 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.688106 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.688120 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.688144 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.688161 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.790691 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.790739 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.790752 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.790773 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.790787 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.893861 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.893923 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.893933 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.893948 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.893960 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.996403 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.996447 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.996456 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.996475 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:03 crc kubenswrapper[4878]: I1204 15:37:03.996486 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:03Z","lastTransitionTime":"2025-12-04T15:37:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.098826 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.098891 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.098904 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.098922 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.098936 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:04Z","lastTransitionTime":"2025-12-04T15:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.179165 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:04 crc kubenswrapper[4878]: E1204 15:37:04.179344 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.179437 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:04 crc kubenswrapper[4878]: E1204 15:37:04.179507 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.201537 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.201618 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.201638 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.201660 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.201674 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:04Z","lastTransitionTime":"2025-12-04T15:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.304416 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.304468 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.304478 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.304495 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.304507 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:04Z","lastTransitionTime":"2025-12-04T15:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.407270 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.407332 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.407346 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.407367 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.407381 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:04Z","lastTransitionTime":"2025-12-04T15:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.514813 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.514899 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.514912 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.514933 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.514954 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:04Z","lastTransitionTime":"2025-12-04T15:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.617697 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.617744 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.617780 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.617801 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.617813 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:04Z","lastTransitionTime":"2025-12-04T15:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.721150 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.721200 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.721213 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.721263 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.721276 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:04Z","lastTransitionTime":"2025-12-04T15:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.824190 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.824249 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.824260 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.824281 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.824294 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:04Z","lastTransitionTime":"2025-12-04T15:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.926888 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.926925 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.926934 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.926951 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:04 crc kubenswrapper[4878]: I1204 15:37:04.926965 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:04Z","lastTransitionTime":"2025-12-04T15:37:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.029471 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.029531 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.029543 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.029564 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.029578 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.132634 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.132685 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.132700 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.132719 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.132732 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.178648 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.178690 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:05 crc kubenswrapper[4878]: E1204 15:37:05.178802 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:05 crc kubenswrapper[4878]: E1204 15:37:05.178948 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.235955 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.236033 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.236045 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.236072 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.236087 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.339390 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.339447 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.339462 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.339482 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.339496 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.442864 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.442961 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.442975 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.442996 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.443016 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.546420 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.546473 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.546485 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.546503 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.546514 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.648629 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.648675 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.648686 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.648705 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.648717 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.752023 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.752071 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.752082 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.752109 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.752122 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.854990 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.855020 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.855030 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.855046 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.855057 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.958021 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.958063 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.958074 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.958094 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:05 crc kubenswrapper[4878]: I1204 15:37:05.958105 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:05Z","lastTransitionTime":"2025-12-04T15:37:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.001768 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:06 crc kubenswrapper[4878]: E1204 15:37:06.002035 4878 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:37:06 crc kubenswrapper[4878]: E1204 15:37:06.002150 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs podName:ab155c5e-9187-4276-98c7-20c0d7e35f4b nodeName:}" failed. No retries permitted until 2025-12-04 15:37:38.002124984 +0000 UTC m=+101.964661940 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs") pod "network-metrics-daemon-k9k9q" (UID: "ab155c5e-9187-4276-98c7-20c0d7e35f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.060477 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.060531 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.060540 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.060560 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.060574 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.163565 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.163622 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.163632 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.163652 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.163664 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.179058 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.179255 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:06 crc kubenswrapper[4878]: E1204 15:37:06.179559 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:06 crc kubenswrapper[4878]: E1204 15:37:06.179889 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.180012 4878 scope.go:117] "RemoveContainer" containerID="75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc" Dec 04 15:37:06 crc kubenswrapper[4878]: E1204 15:37:06.180351 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.191781 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.209247 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.225576 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.239455 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: 
I1204 15:37:06.254607 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.266611 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.266656 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.266669 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.266686 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.266697 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.268930 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.280820 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.300692 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:50Z\\\",\\\"message\\\":\\\"04 15:36:50.017485 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qzptn\\\\nF1204 15:36:50.017506 6519 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler 
for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z]\\\\nI1204 15:36:50.017499 6519 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1204 15:36:50.017511 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1204 15:36:50.017518 6519 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da
2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.316181 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.330902 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.347353 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.364432 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.369335 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.369380 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.369391 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.369406 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.369420 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.380231 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc 
kubenswrapper[4878]: I1204 15:37:06.396289 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.416501 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.434406 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.449958 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:06Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.473447 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc 
kubenswrapper[4878]: I1204 15:37:06.473501 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.473515 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.473537 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.473551 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.576249 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.576309 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.576320 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.576341 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.576353 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.679287 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.679328 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.679340 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.679360 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.679373 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.781957 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.782022 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.782033 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.782053 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.782062 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.884353 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.884416 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.884429 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.884451 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.884464 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.986593 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.986640 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.986656 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.986673 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:06 crc kubenswrapper[4878]: I1204 15:37:06.986684 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:06Z","lastTransitionTime":"2025-12-04T15:37:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.089582 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.089631 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.089645 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.089664 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.089674 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:07Z","lastTransitionTime":"2025-12-04T15:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.179687 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.179797 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:07 crc kubenswrapper[4878]: E1204 15:37:07.179833 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:07 crc kubenswrapper[4878]: E1204 15:37:07.179991 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.192587 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.192648 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.192666 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.192689 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.192702 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:07Z","lastTransitionTime":"2025-12-04T15:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.197757 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.219149 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.234377 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: 
I1204 15:37:07.247292 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.262324 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7937
9b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.281092 4878 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.294015 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.295192 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.295227 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.295238 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.295255 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.295268 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:07Z","lastTransitionTime":"2025-12-04T15:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.315545 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:50Z\\\",\\\"message\\\":\\\"04 15:36:50.017485 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qzptn\\\\nF1204 15:36:50.017506 6519 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler 
for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z]\\\\nI1204 15:36:50.017499 6519 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1204 15:36:50.017511 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1204 15:36:50.017518 6519 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da
2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.330056 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.347745 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.372473 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.397982 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.398037 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.398052 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.398073 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.398087 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:07Z","lastTransitionTime":"2025-12-04T15:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.411347 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"
ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.431413 4878 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.447033 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.463864 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.478145 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.492097 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:07Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:07 crc 
kubenswrapper[4878]: I1204 15:37:07.500625 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.500653 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.500668 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.500688 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.500703 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:07Z","lastTransitionTime":"2025-12-04T15:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.603384 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.603466 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.603503 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.603528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.603547 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:07Z","lastTransitionTime":"2025-12-04T15:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.706824 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.706904 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.706917 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.706939 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.706958 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:07Z","lastTransitionTime":"2025-12-04T15:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.809745 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.809799 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.809810 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.809829 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.809843 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:07Z","lastTransitionTime":"2025-12-04T15:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.912597 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.912640 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.912651 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.912668 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:07 crc kubenswrapper[4878]: I1204 15:37:07.912679 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:07Z","lastTransitionTime":"2025-12-04T15:37:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.015890 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.015956 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.015966 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.015987 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.016001 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.118564 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.118651 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.118676 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.118704 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.118723 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.179455 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.179597 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:08 crc kubenswrapper[4878]: E1204 15:37:08.179683 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:08 crc kubenswrapper[4878]: E1204 15:37:08.179830 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.222371 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.222430 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.222445 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.222466 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.222483 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.325163 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.325204 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.325215 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.325234 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.325245 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.428824 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.429332 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.429351 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.429373 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.429391 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.532149 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.532199 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.532211 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.532232 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.532247 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.594614 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/0.log" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.594672 4878 generic.go:334] "Generic (PLEG): container finished" podID="c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757" containerID="b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1" exitCode=1 Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.594710 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p8p7" event={"ID":"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757","Type":"ContainerDied","Data":"b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.595215 4878 scope.go:117] "RemoveContainer" containerID="b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.614560 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.630710 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:07Z\\\",\\\"message\\\":\\\"2025-12-04T15:36:22+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8\\\\n2025-12-04T15:36:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8 to /host/opt/cni/bin/\\\\n2025-12-04T15:36:22Z [verbose] multus-daemon started\\\\n2025-12-04T15:36:22Z [verbose] Readiness Indicator file check\\\\n2025-12-04T15:37:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.635889 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.635949 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.635964 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.635989 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.636003 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.644727 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc 
kubenswrapper[4878]: I1204 15:37:08.663026 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.677789 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.692857 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.706276 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: 
I1204 15:37:08.722070 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.736202 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.738632 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.738674 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.738687 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc 
kubenswrapper[4878]: I1204 15:37:08.738708 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.738719 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.749622 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.768494 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:50Z\\\",\\\"message\\\":\\\"04 15:36:50.017485 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qzptn\\\\nF1204 15:36:50.017506 6519 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler 
for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z]\\\\nI1204 15:36:50.017499 6519 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1204 15:36:50.017511 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1204 15:36:50.017518 6519 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da
2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.781706 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.794746 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.816581 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.834384 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.841167 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.841227 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.841245 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 
15:37:08.841265 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.841280 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.848544 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.861762 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:08Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.944853 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.944915 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.944923 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.944940 4878 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:08 crc kubenswrapper[4878]: I1204 15:37:08.944953 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:08Z","lastTransitionTime":"2025-12-04T15:37:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.048235 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.048300 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.048317 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.048337 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.048349 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.152559 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.152630 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.152644 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.152668 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.152682 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.178855 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.179038 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:09 crc kubenswrapper[4878]: E1204 15:37:09.179250 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:09 crc kubenswrapper[4878]: E1204 15:37:09.179526 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.256022 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.256199 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.256237 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.256259 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.256272 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.359243 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.359303 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.359320 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.359341 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.359353 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.462362 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.462428 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.462443 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.462466 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.462481 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.514607 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.514678 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.514691 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.514712 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.514727 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: E1204 15:37:09.528968 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.533597 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.533650 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.533661 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.533682 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.533703 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: E1204 15:37:09.548462 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.553333 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.553397 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.553409 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.553425 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.553436 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.590003 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.590081 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.590101 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.590125 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.590145 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.599650 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/0.log" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.599713 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p8p7" event={"ID":"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757","Type":"ContainerStarted","Data":"9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0"} Dec 04 15:37:09 crc kubenswrapper[4878]: E1204 15:37:09.603992 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: E1204 15:37:09.604141 4878 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.605653 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.605709 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.605722 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.605739 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.605749 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.614059 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc 
kubenswrapper[4878]: I1204 15:37:09.629321 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.643827 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.656540 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.674954 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:07Z\\\",\\\"message\\\":\\\"2025-12-04T15:36:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8\\\\n2025-12-04T15:36:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8 to /host/opt/cni/bin/\\\\n2025-12-04T15:36:22Z [verbose] multus-daemon started\\\\n2025-12-04T15:36:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T15:37:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:37:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.690931 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12f
ce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.705681 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.708559 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.708652 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.708668 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.708693 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.708708 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.725010 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.738304 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745
f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.752226 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.764636 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.776232 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.797921 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:50Z\\\",\\\"message\\\":\\\"04 15:36:50.017485 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qzptn\\\\nF1204 15:36:50.017506 6519 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler 
for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z]\\\\nI1204 15:36:50.017499 6519 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1204 15:36:50.017511 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1204 15:36:50.017518 6519 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da
2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.809598 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.810964 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.811031 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.811044 4878 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.811065 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.811079 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.822406 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.837214 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.852705 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:09Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.913761 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.913838 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.913863 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.913922 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:09 crc kubenswrapper[4878]: I1204 15:37:09.913942 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:09Z","lastTransitionTime":"2025-12-04T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.017086 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.017156 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.017169 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.017189 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.017200 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.119285 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.119334 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.119343 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.119359 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.119370 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.179153 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.179177 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:10 crc kubenswrapper[4878]: E1204 15:37:10.179310 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:10 crc kubenswrapper[4878]: E1204 15:37:10.179418 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.222229 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.222282 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.222293 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.222311 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.222322 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.325583 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.325628 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.325637 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.325655 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.325665 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.429120 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.429177 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.429187 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.429210 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.429223 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.531864 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.531925 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.531936 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.531952 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.531962 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.635196 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.635272 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.635284 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.635307 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.635321 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.739176 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.739269 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.739281 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.739301 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.739374 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.842628 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.842673 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.842683 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.842729 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.842742 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.945815 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.945859 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.945881 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.945899 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:10 crc kubenswrapper[4878]: I1204 15:37:10.945914 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:10Z","lastTransitionTime":"2025-12-04T15:37:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.049317 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.049379 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.049397 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.049425 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.049441 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.152685 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.152752 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.152769 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.152789 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.152807 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.179652 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.179724 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:11 crc kubenswrapper[4878]: E1204 15:37:11.179839 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:11 crc kubenswrapper[4878]: E1204 15:37:11.179915 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.255945 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.255984 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.255993 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.256010 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.256021 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.358975 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.359029 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.359041 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.359061 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.359076 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.466066 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.466114 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.466124 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.466143 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.466152 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.568921 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.569013 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.569039 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.569063 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.569077 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.671927 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.671991 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.672004 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.672023 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.672041 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.775543 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.775594 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.775605 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.775627 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.775640 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.878314 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.878369 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.878381 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.878402 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.878416 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.981280 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.981350 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.981366 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.981387 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:11 crc kubenswrapper[4878]: I1204 15:37:11.981402 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:11Z","lastTransitionTime":"2025-12-04T15:37:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.084318 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.084368 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.084377 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.084394 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.084405 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:12Z","lastTransitionTime":"2025-12-04T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.179250 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.179308 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:12 crc kubenswrapper[4878]: E1204 15:37:12.179452 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:12 crc kubenswrapper[4878]: E1204 15:37:12.179589 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.187511 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.187556 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.187572 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.187589 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.187602 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:12Z","lastTransitionTime":"2025-12-04T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.290554 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.290612 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.290626 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.290654 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.290667 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:12Z","lastTransitionTime":"2025-12-04T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.393505 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.393555 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.393567 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.393581 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.393592 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:12Z","lastTransitionTime":"2025-12-04T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.497200 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.497280 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.497299 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.497326 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.497348 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:12Z","lastTransitionTime":"2025-12-04T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.601055 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.601130 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.601158 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.601185 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.601209 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:12Z","lastTransitionTime":"2025-12-04T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.705254 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.705361 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.705377 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.705401 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.705415 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:12Z","lastTransitionTime":"2025-12-04T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.807917 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.807994 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.808007 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.808027 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.808039 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:12Z","lastTransitionTime":"2025-12-04T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.911160 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.911215 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.911228 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.911246 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:12 crc kubenswrapper[4878]: I1204 15:37:12.911259 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:12Z","lastTransitionTime":"2025-12-04T15:37:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.014143 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.014196 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.014206 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.014227 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.014240 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.117380 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.117450 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.117463 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.117479 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.117489 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.179073 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.179158 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:13 crc kubenswrapper[4878]: E1204 15:37:13.179263 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:13 crc kubenswrapper[4878]: E1204 15:37:13.179321 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.221114 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.221180 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.221193 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.221213 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.221226 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.324136 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.324216 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.324232 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.324251 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.324268 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.427382 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.427470 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.427485 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.427509 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.427527 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.530724 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.530797 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.530812 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.530834 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.530851 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.634862 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.635012 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.635031 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.635062 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.635088 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.738175 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.738247 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.738259 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.738275 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.738287 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.841460 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.841527 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.841539 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.841554 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.841564 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.944825 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.944881 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.944892 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.944918 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:13 crc kubenswrapper[4878]: I1204 15:37:13.944930 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:13Z","lastTransitionTime":"2025-12-04T15:37:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.047828 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.047903 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.047915 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.047932 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.047942 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.150473 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.150521 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.150534 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.150551 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.150562 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.179263 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.179336 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:14 crc kubenswrapper[4878]: E1204 15:37:14.179456 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:14 crc kubenswrapper[4878]: E1204 15:37:14.179654 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.253523 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.253592 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.253603 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.253622 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.253634 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.356738 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.356801 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.356813 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.356827 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.356837 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.459370 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.459436 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.459445 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.459468 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.459480 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.562651 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.562717 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.562728 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.562749 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.562761 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.669916 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.669981 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.669995 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.670016 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.670030 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.773382 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.773434 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.773446 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.773464 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.773477 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.876193 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.876257 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.876271 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.876293 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.876309 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.978127 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.978365 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.978378 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.978396 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:14 crc kubenswrapper[4878]: I1204 15:37:14.978407 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:14Z","lastTransitionTime":"2025-12-04T15:37:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.081783 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.081927 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.081941 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.081961 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.081973 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:15Z","lastTransitionTime":"2025-12-04T15:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.179002 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.179002 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:15 crc kubenswrapper[4878]: E1204 15:37:15.179256 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:15 crc kubenswrapper[4878]: E1204 15:37:15.179165 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.183992 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.184037 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.184046 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.184062 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.184075 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:15Z","lastTransitionTime":"2025-12-04T15:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.287122 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.287210 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.287221 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.287238 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.287248 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:15Z","lastTransitionTime":"2025-12-04T15:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.390221 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.390275 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.390290 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.390310 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.390328 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:15Z","lastTransitionTime":"2025-12-04T15:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.493832 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.493919 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.493931 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.493948 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.493961 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:15Z","lastTransitionTime":"2025-12-04T15:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.597092 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.597142 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.597157 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.597180 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.597196 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:15Z","lastTransitionTime":"2025-12-04T15:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.703967 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.704028 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.704038 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.704056 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.704067 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:15Z","lastTransitionTime":"2025-12-04T15:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.807049 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.807089 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.807100 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.807119 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.807131 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:15Z","lastTransitionTime":"2025-12-04T15:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.909409 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.909453 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.909463 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.909478 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:15 crc kubenswrapper[4878]: I1204 15:37:15.909488 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:15Z","lastTransitionTime":"2025-12-04T15:37:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.012758 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.012833 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.012846 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.012865 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.012899 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.115592 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.115666 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.115683 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.115703 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.115716 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.178706 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.178775 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:16 crc kubenswrapper[4878]: E1204 15:37:16.178942 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:16 crc kubenswrapper[4878]: E1204 15:37:16.179124 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.218944 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.219005 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.219022 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.219039 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.219050 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.322135 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.322175 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.322186 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.322201 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.322213 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.425103 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.425165 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.425177 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.425203 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.425216 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.528240 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.528304 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.528401 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.528427 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.528440 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.631139 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.631183 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.631193 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.631212 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.631226 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.734192 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.734237 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.734248 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.734266 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.734276 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.837469 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.837529 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.837540 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.837556 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.837568 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.941223 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.941285 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.941300 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.941318 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:16 crc kubenswrapper[4878]: I1204 15:37:16.941332 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:16Z","lastTransitionTime":"2025-12-04T15:37:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.043944 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.044006 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.044026 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.044051 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.044070 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.146624 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.146690 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.146700 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.146717 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.146728 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.179201 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:17 crc kubenswrapper[4878]: E1204 15:37:17.179376 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.179554 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:17 crc kubenswrapper[4878]: E1204 15:37:17.179687 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.193725 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.206915 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.220042 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.232477 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.246171 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.249333 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.249376 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.249386 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.249406 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.249416 4878 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.258847 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.274832 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:07Z\\\",\\\"message\\\":\\\"2025-12-04T15:36:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8\\\\n2025-12-04T15:36:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8 to /host/opt/cni/bin/\\\\n2025-12-04T15:36:22Z [verbose] multus-daemon started\\\\n2025-12-04T15:36:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T15:37:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:37:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.288048 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.302581 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.318023 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.338318 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.351711 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.351757 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.351769 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.351788 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.351802 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.353110 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.368531 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.383826 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T1
5:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.396243 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.415890 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:50Z\\\",\\\"message\\\":\\\"04 15:36:50.017485 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qzptn\\\\nF1204 15:36:50.017506 6519 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler 
for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z]\\\\nI1204 15:36:50.017499 6519 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1204 15:36:50.017511 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1204 15:36:50.017518 6519 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da
2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.429654 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:17Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.454507 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.454553 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.454566 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.454589 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.454605 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.556527 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.556580 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.556593 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.556611 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.556623 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.658848 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.658910 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.658920 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.658936 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.658949 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.762558 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.763415 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.763471 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.763534 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.763570 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.867334 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.867392 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.867408 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.867428 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.867443 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.970834 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.970915 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.970933 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.970957 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:17 crc kubenswrapper[4878]: I1204 15:37:17.970977 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:17Z","lastTransitionTime":"2025-12-04T15:37:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.073221 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.073718 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.073738 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.073764 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.073783 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.176852 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.176916 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.176928 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.176946 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.176958 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.179123 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.179191 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:18 crc kubenswrapper[4878]: E1204 15:37:18.179239 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:18 crc kubenswrapper[4878]: E1204 15:37:18.179328 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.279376 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.279422 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.279437 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.279455 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.279469 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.382154 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.382230 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.382241 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.382260 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.382295 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.484687 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.484720 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.484731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.484748 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.484761 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.587527 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.587605 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.587622 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.587652 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.587671 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.690940 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.691009 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.691026 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.691052 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.691070 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.793494 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.793565 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.793580 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.793600 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.793614 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.896851 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.896908 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.896918 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.896935 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.896946 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.999807 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:18 crc kubenswrapper[4878]: I1204 15:37:18.999863 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:18.999911 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:18.999965 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:18.999979 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:18Z","lastTransitionTime":"2025-12-04T15:37:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.103362 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.103399 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.103409 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.103425 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.103435 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.152754 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.152961 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.152994 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.153083 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.153119 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:23.153070036 +0000 UTC m=+147.115606992 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.153194 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:38:23.153181099 +0000 UTC m=+147.115718055 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.153300 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.153448 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:38:23.153415515 +0000 UTC m=+147.115952671 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.179410 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.179607 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.179615 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.180588 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.180750 4878 scope.go:117] "RemoveContainer" containerID="75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.197804 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.206182 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.206228 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.206240 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.206262 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.206276 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.254354 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.254431 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.254602 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.254631 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.254647 4878 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.254710 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-04 15:38:23.254691802 +0000 UTC m=+147.217228778 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.254602 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.254754 4878 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.254772 4878 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.254826 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-04 15:38:23.254808085 +0000 UTC m=+147.217345061 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.310025 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.310106 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.310140 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.310159 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.310171 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.412820 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.412866 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.412894 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.412915 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.412929 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.516042 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.516084 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.516093 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.516110 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.516122 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.619607 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.619681 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.619694 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.619714 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.619726 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.665070 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.665140 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.665155 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.665181 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.665194 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.680856 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:19Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.686257 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.686308 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.686337 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.686362 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.686378 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.701915 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:19Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.707062 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.707117 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.707130 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.707151 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.707167 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.723608 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:19Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.729502 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.729560 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.729572 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.729598 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.729615 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.745963 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:19Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.750667 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.750700 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.750711 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.750731 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.750747 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.765837 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:19Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:19 crc kubenswrapper[4878]: E1204 15:37:19.766018 4878 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.768331 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.768382 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.768402 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.768424 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.768437 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.872852 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.872945 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.872959 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.872981 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.872995 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.975572 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.975638 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.975652 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.975670 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:19 crc kubenswrapper[4878]: I1204 15:37:19.975684 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:19Z","lastTransitionTime":"2025-12-04T15:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.081340 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.081392 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.081403 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.081423 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.081440 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:20Z","lastTransitionTime":"2025-12-04T15:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.179659 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.179688 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:20 crc kubenswrapper[4878]: E1204 15:37:20.179811 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:20 crc kubenswrapper[4878]: E1204 15:37:20.179946 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.184389 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.184427 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.184437 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.184452 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.184477 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:20Z","lastTransitionTime":"2025-12-04T15:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.287027 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.287099 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.287110 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.287127 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.287138 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:20Z","lastTransitionTime":"2025-12-04T15:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.390211 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.390289 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.390301 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.390321 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.390334 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:20Z","lastTransitionTime":"2025-12-04T15:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.493699 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.493746 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.493757 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.493781 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.493793 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:20Z","lastTransitionTime":"2025-12-04T15:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.597155 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.597197 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.597205 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.597226 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.597240 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:20Z","lastTransitionTime":"2025-12-04T15:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.636913 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/2.log" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.640551 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.641067 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.665759 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"497fbfd8-66c7-4113-8fce-f6e4a543692e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3370c6be8d898bbe818ee571c5c413010c6934e2c04d2d1701fc8067cfd4b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e4f9215e74438731e57fa6f60900340bc1ea89257cc0fdf3b8480c8858fc4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5418b30f6c66f72a1d99bc42e3e44d2c5eae369a8e24edb1dbab42d10f7dad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019b
ee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7e072b3a4e2b73e6d25c66598ba414fb27c262e40e4af238fc79d9cac3999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1cf227c41856e35ed5433f312c767cf4257aca2189bf3a2a00300b795ea3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\
\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1
ee99812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ee99812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.685308 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.700206 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.700303 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.700316 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.700336 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.700350 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:20Z","lastTransitionTime":"2025-12-04T15:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.708555 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.723778 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745
f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.740444 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.758739 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.771474 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.785476 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.802917 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.802965 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.802978 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.803001 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.803015 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:20Z","lastTransitionTime":"2025-12-04T15:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.811111 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:50Z\\\",\\\"message\\\":\\\"04 15:36:50.017485 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qzptn\\\\nF1204 15:36:50.017506 6519 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler 
for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z]\\\\nI1204 15:36:50.017499 6519 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1204 15:36:50.017511 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1204 15:36:50.017518 6519 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-ne
tns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"
},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.827473 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.846492 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.863250 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.880040 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.897346 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.905908 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.905953 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.905965 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.905981 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.905991 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:20Z","lastTransitionTime":"2025-12-04T15:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.915044 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.932696 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.947527 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:07Z\\\",\\\"message\\\":\\\"2025-12-04T15:36:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8\\\\n2025-12-04T15:36:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8 to /host/opt/cni/bin/\\\\n2025-12-04T15:36:22Z [verbose] multus-daemon started\\\\n2025-12-04T15:36:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T15:37:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:37:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:20 crc kubenswrapper[4878]: I1204 15:37:20.958859 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:20Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.008332 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.008382 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.008393 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.008410 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.008421 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.111746 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.111790 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.111801 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.111819 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.111829 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.179570 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.179675 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:21 crc kubenswrapper[4878]: E1204 15:37:21.179799 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:21 crc kubenswrapper[4878]: E1204 15:37:21.180072 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.194804 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.214856 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.214925 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.214937 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.214952 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.214961 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.318294 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.318360 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.318370 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.318386 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.318398 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.421534 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.421580 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.421591 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.421608 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.421622 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.528086 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.528185 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.528206 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.528234 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.528261 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.631924 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.631976 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.631987 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.632005 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.632017 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.646655 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/3.log" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.647531 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/2.log" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.651039 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" exitCode=1 Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.651091 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.651154 4878 scope.go:117] "RemoveContainer" containerID="75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.651860 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:37:21 crc kubenswrapper[4878]: E1204 15:37:21.652082 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.668038 4878 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.688648 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"497fbfd8-66c7-4113-8fce-f6e4a543692e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3370c6be8d898bbe818ee571c5c413010c6934e2c04d2d1701fc8067cfd4b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e4f9215e74438731e57fa6f60900340bc1ea89257cc0fdf3b8480c8858fc4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5418b30f6c66f72a1d99bc42e3e44d2c5eae369a8e24edb1dbab42d10f7dad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7e072b3a4e2b73e6d25c66598ba414fb27c262e40e4af238fc79d9cac3999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1cf227c41856e35ed5433f312c767cf4257aca2189bf3a2a00300b795ea3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"d
ata-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1ee99812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ee99812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.706041 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.722588 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.734698 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.734993 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.735078 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.735157 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.735218 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.736337 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.749235 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.759561 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.769288 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.788663 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://75274423ceea2ea13f7a5cb7df003896d72e6c816b396a2c28e1e4a3161001cc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:36:50Z\\\",\\\"message\\\":\\\"04 15:36:50.017485 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-qzptn\\\\nF1204 15:36:50.017506 6519 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler 
for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:36:49Z is after 2025-08-24T17:21:41Z]\\\\nI1204 15:36:50.017499 6519 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1204 15:36:50.017511 6519 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-controller-manager/kube-controller-manager-crc\\\\nI1204 15:36:50.017518 6519 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:21Z\\\",\\\"message\\\":\\\"c64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e 
Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 15:37:21.117433 6930 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1204 15:37:21.117498 6930 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9
305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.801986 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.814635 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.828080 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f020a1-3429-4db0-9426-c21895bedd1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://963a0243e0fc1bae361a187173501210b2d41a84bb276cfcec39a4f69935422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff40aaba3c837beb7616b04aa66175ec42dce22b95deab03efddebffefaa0746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff40aaba3c837beb7616b04aa66175ec42dce22b95deab03efddebffefaa0746\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.838080 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.838132 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.838144 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.838162 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.838182 4878 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.845473 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.858957 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.870209 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc 
kubenswrapper[4878]: I1204 15:37:21.885045 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.902069 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.919169 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.934443 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:07Z\\\",\\\"message\\\":\\\"2025-12-04T15:36:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8\\\\n2025-12-04T15:36:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8 to /host/opt/cni/bin/\\\\n2025-12-04T15:36:22Z [verbose] multus-daemon started\\\\n2025-12-04T15:36:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T15:37:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:37:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:21Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.940537 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.940597 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.940610 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.940634 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:21 crc kubenswrapper[4878]: I1204 15:37:21.940647 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:21Z","lastTransitionTime":"2025-12-04T15:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.043552 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.043605 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.043616 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.043635 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.043647 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.147541 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.147621 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.147646 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.147681 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.147706 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.179615 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.179615 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:22 crc kubenswrapper[4878]: E1204 15:37:22.179815 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:22 crc kubenswrapper[4878]: E1204 15:37:22.179958 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.250333 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.250402 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.250413 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.250436 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.250458 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.353389 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.353483 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.353492 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.353510 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.353521 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.456920 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.456955 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.456965 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.456981 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.456990 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.559716 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.559765 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.559775 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.559795 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.559808 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.657299 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/3.log" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.661900 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.661949 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.661961 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.661984 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.661995 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.662247 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:37:22 crc kubenswrapper[4878]: E1204 15:37:22.662620 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.675921 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084
652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.689847 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.703793 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.734758 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:21Z\\\",\\\"message\\\":\\\"c64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 15:37:21.117433 6930 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1204 15:37:21.117498 6930 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:37:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da
2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.747858 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.761513 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f020a1-3429-4db0-9426-c21895bedd1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://963a0243e0fc1bae361a187173501210b2d41a84bb276cfcec39a4f69935422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff40aaba3c837beb7616b04aa66175ec42dce22b95deab03efddebffefaa0746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff40aaba3c837beb7616b04aa66175ec42dce22b95deab03efddebffefaa0746\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.764691 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.764780 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.764791 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.764809 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.764841 4878 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.779001 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.793891 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.806909 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.821795 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.835671 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.850397 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.862541 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:07Z\\\",\\\"message\\\":\\\"2025-12-04T15:36:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8\\\\n2025-12-04T15:36:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8 to /host/opt/cni/bin/\\\\n2025-12-04T15:36:22Z [verbose] multus-daemon started\\\\n2025-12-04T15:36:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T15:37:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:37:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.866818 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.866855 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.866866 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.866901 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.866912 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.874410 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc 
kubenswrapper[4878]: I1204 15:37:22.893979 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"497fbfd8-66c7-4113-8fce-f6e4a543692e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3370c6be8d898bbe818ee571c5c413010c6934e2c04d2d1701fc8067cfd4b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://8e4f9215e74438731e57fa6f60900340bc1ea89257cc0fdf3b8480c8858fc4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5418b30f6c66f72a1d99bc42e3e44d2c5eae369a8e24edb1dbab42d10f7dad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7e072b3a4e2b73e6d25c66598ba414fb27c262e40e4af238fc79d9cac3999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1cf227c41856e35ed5433f312c767cf4257aca2189bf3a2a00300b795ea3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1ee99812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ee99812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.911433 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.925535 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.938562 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: 
I1204 15:37:22.949427 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:22Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.969478 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.969532 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.969544 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.969564 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:22 crc kubenswrapper[4878]: I1204 15:37:22.969576 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:22Z","lastTransitionTime":"2025-12-04T15:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.072024 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.072093 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.072111 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.072130 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.072148 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:23Z","lastTransitionTime":"2025-12-04T15:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.175528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.175582 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.175592 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.175610 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.175622 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:23Z","lastTransitionTime":"2025-12-04T15:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.178924 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:23 crc kubenswrapper[4878]: E1204 15:37:23.179066 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.178924 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:23 crc kubenswrapper[4878]: E1204 15:37:23.179276 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.278797 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.278855 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.278898 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.278920 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.278932 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:23Z","lastTransitionTime":"2025-12-04T15:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.382239 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.382302 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.382315 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.382335 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.382348 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:23Z","lastTransitionTime":"2025-12-04T15:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.485198 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.485276 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.485544 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.485570 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.485585 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:23Z","lastTransitionTime":"2025-12-04T15:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.588620 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.588725 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.588747 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.588770 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.588785 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:23Z","lastTransitionTime":"2025-12-04T15:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.692015 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.692084 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.692099 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.692124 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.692144 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:23Z","lastTransitionTime":"2025-12-04T15:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.795499 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.795570 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.795587 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.795612 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.795627 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:23Z","lastTransitionTime":"2025-12-04T15:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.898076 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.898147 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.898163 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.898185 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:23 crc kubenswrapper[4878]: I1204 15:37:23.898207 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:23Z","lastTransitionTime":"2025-12-04T15:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.001704 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.001781 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.001796 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.001823 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.001838 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.105597 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.105654 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.105670 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.105690 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.105703 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.179068 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.179088 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:24 crc kubenswrapper[4878]: E1204 15:37:24.179262 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:24 crc kubenswrapper[4878]: E1204 15:37:24.179348 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.208413 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.208468 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.208479 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.208497 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.208511 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.311418 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.311462 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.311473 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.311491 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.311505 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.413695 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.413766 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.413776 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.413798 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.413811 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.516455 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.516517 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.516528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.516547 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.516559 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.620521 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.620582 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.620594 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.620613 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.620623 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.724189 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.724701 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.724764 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.724882 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.724964 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.827746 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.827791 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.827805 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.827825 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.827841 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.931161 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.931224 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.931237 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.931261 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:24 crc kubenswrapper[4878]: I1204 15:37:24.931279 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:24Z","lastTransitionTime":"2025-12-04T15:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.035081 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.035134 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.035146 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.035163 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.035175 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.138013 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.138098 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.138168 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.138217 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.138244 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.179271 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.179337 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:25 crc kubenswrapper[4878]: E1204 15:37:25.179548 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:25 crc kubenswrapper[4878]: E1204 15:37:25.179752 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.241590 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.241642 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.241653 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.241671 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.241684 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.344812 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.344876 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.344931 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.344956 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.344974 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.448098 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.448139 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.448149 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.448165 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.448175 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.551068 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.551115 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.551127 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.551141 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.551151 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.653890 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.653938 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.653952 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.653970 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.653981 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.757099 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.757162 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.757179 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.757202 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.757219 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.860410 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.860462 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.860471 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.860487 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.860498 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.963242 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.963299 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.963311 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.963353 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:25 crc kubenswrapper[4878]: I1204 15:37:25.963364 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:25Z","lastTransitionTime":"2025-12-04T15:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.066465 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.066516 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.066529 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.066550 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.066561 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.170297 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.170342 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.170353 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.170370 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.170380 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.178994 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.179031 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:26 crc kubenswrapper[4878]: E1204 15:37:26.179185 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:26 crc kubenswrapper[4878]: E1204 15:37:26.179293 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.273141 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.273188 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.273200 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.273220 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.273233 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.375973 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.376023 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.376034 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.376052 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.376063 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.478903 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.478981 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.478996 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.479014 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.479027 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.582308 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.582359 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.582370 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.582389 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.582403 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.684862 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.684942 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.684954 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.685004 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.685024 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.788547 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.788623 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.788635 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.788654 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.788669 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.891843 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.891948 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.891958 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.891978 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.891989 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.994847 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.994932 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.994946 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.994964 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:26 crc kubenswrapper[4878]: I1204 15:37:26.994975 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:26Z","lastTransitionTime":"2025-12-04T15:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.098202 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.098259 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.098270 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.098289 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.098304 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:27Z","lastTransitionTime":"2025-12-04T15:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.179574 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.179622 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:27 crc kubenswrapper[4878]: E1204 15:37:27.179827 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:27 crc kubenswrapper[4878]: E1204 15:37:27.179990 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.195142 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d3e0dd9ea4f4137a46ef05cee6beb7ab349d89519e1d85c51510aa0c3466b717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257b7e52356b4813c5a3a495689cff8bb38da799a877d36ad173808a1dd00914\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.200692 4878 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.200729 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.200739 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.200754 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.200766 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:27Z","lastTransitionTime":"2025-12-04T15:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.210357 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.226260 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9p8p7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:07Z\\\",\\\"message\\\":\\\"2025-12-04T15:36:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8\\\\n2025-12-04T15:36:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5d2d1d17-3453-402e-bdf0-28468c42ead8 to /host/opt/cni/bin/\\\\n2025-12-04T15:36:22Z [verbose] multus-daemon started\\\\n2025-12-04T15:36:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-04T15:37:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:37:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lkt5z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9p8p7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.240092 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab155c5e-9187-4276-98c7-20c0d7e35f4b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jvbhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-k9k9q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.253065 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7929c074c840b4e39b453620f37ca8856be200240b25527c60b7ff864a43e64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.266572 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.285814 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e694bb65-ccd1-4e85-921a-607943be54b2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bcc925bb8d6fae08c7b9519363212c60a31fc495ded2ffa4710bf82af95befa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c0fee1dba169432564614d9edda1dc8dde0129a1d35e3de6b29ca7b22abba1a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://55e035dfe6583cd78b1abb13236a305284d3f7384d1793e7a233e75da736ae63\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://315eb051285f906082d86654e31af857956c5aa72b212bddd2d6bfc99c348eb0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://877eb
d388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://877ebd388d36349832c814bc330a9b7e03f20a681864d27be2c19b6c71034fef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd68077d9ea4f90eb7b1959639c4b8fed4b449e350479bd26681222209c6251b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://43226d790c178065a944d1cfbaf6e9d4945b904bc16ea4a030c97b11c1c2bc38\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q4hlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xrkl9\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.297986 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5dfa7734ff17232776a5cbd3deffa1c935319122ca391fcf552f70900df55f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nkmlq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-xrwqw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: 
I1204 15:37:27.303064 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.303102 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.303111 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.303127 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.303139 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:27Z","lastTransitionTime":"2025-12-04T15:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.310976 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6rrvz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"253bac41-fb3d-4fa1-8586-30fb4b47ea9a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbb5ec7860e99c12fce1c19c26d41f4e8002bc491d5b6e807bff217c9d647abd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sgkw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6rrvz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.329626 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"497fbfd8-66c7-4113-8fce-f6e4a543692e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3370c6be8d898bbe818ee571c5c413010c6934e2c04d2d1701fc8067cfd4b25f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e4f9215e74438731e57fa6f60900340bc1ea89257cc0fdf3b8480c8858fc4e1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5418b30f6c66f72a1d99bc42e3e44d2c5eae369a8e24edb1dbab42d10f7dad5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bdd7e072b3a4e2b73e6d25c66598ba414fb27c262e40e4af238fc79d9cac3999\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1cf227c41856e35ed5433f312c767cf4257aca2189bf3a2a00300b795ea3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53a98bcc57f7bbebdf42ab42df3e36025470222995ffad1e72a695551baca5a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4f92886c66de01a36e7f4d790ad79dd42fe707f9ae75c17beb9e4eaa16aef03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c1ee9
9812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c1ee99812ca8b5fb60d1dbb08b20f074c2fc46f9fe9b0d1ac8d5109e6dfd27c7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.343205 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b30d9e13c42d721c97eb9d535bbf0cee15e3c5fc1181ef4d03779de96bcbfc1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.354251 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5bgh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea88ea7e-f678-42eb-9a92-ccc0a32f096e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff401b91a77e6b0582c265e96a1ae56ece497000a2d63b0c888bf9dcc3017d0a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-gtz4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5bgh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.373521 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:23Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-04T15:37:21Z\\\",\\\"message\\\":\\\"c64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {7e8bb06a-06a5-45bc-a752-26a17d322811}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1204 15:37:21.117433 6930 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1204 15:37:21.117498 6930 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-04T15:37:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14783f90e9053bd5da
2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:36:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:36:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4nxwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qzptn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.387615 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abc34179-1681-4d1e-9bca-55096396bb50\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aaaf49bce09d82e1fc34d8ef8c86a903601601f5d1a555220e5a730482d7d334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2b28bf9ceeb120d7f466593539343aa9d85e2862248911e6cbbc85e6f3f7e7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d1d60c5a2185961dc2a271e4c87e8d464d0927239705658a6d4036c48c30f95\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.400373 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c0f020a1-3429-4db0-9426-c21895bedd1c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://963a0243e0fc1bae361a187173501210b2d41a84bb276cfcec39a4f69935422c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff40aaba3c837beb7616b04aa66175ec42dce22b95deab03efddebffefaa0746\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff40aaba3c837beb7616b04aa66175ec42dce22b95deab03efddebffefaa0746\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.405461 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.405764 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.405833 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.405864 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.405925 4878 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:27Z","lastTransitionTime":"2025-12-04T15:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.415143 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3829285d-c049-4d27-b390-5d88c407bd0f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.430770 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.444990 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63cca643-a7db-4c46-a8eb-350b469d17f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7128fe016ac9ff0bb5e07a5a1bacc4ff7983549b27cf3464616ecea60c96bda4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23007eeb87d51bfe2fc225b848503f281e413
c8daae7069d54db4d902d29c82b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:36:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wxg2r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:36:32Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-prhdp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.457196 4878 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8cac2050-844a-4631-bf62-0b1a173113e4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:36:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-04T15:35:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d52a16f3b3bdc69216ea5e1584922acc663013ea0b328d63130d5ea4c8065ef8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a3c3fe00353b5426004ee0950fbd9e92dc562e46ea545dc1a3431d939d60461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15aba5071006681ff885ddf9b0cb34494b9563a88c1a126cfe7695d1be935845\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-04T15:35:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://97fb5c99932d57d01eec1d983ab1c9d2730585511ef9515dd2d1e513ff1f3639\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-04T15:35:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-04T15:35:58Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-04T15:35:57Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:27Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.509316 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.509382 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.509393 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.509413 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.509424 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:27Z","lastTransitionTime":"2025-12-04T15:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.612496 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.612551 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.612561 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.612579 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.612589 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:27Z","lastTransitionTime":"2025-12-04T15:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.715676 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.715748 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.715763 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.715792 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.715812 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:27Z","lastTransitionTime":"2025-12-04T15:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.818363 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.818440 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.818466 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.818489 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.818506 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:27Z","lastTransitionTime":"2025-12-04T15:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.921476 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.921528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.921543 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.921562 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:27 crc kubenswrapper[4878]: I1204 15:37:27.921575 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:27Z","lastTransitionTime":"2025-12-04T15:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.024104 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.024200 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.024214 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.024236 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.024252 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.127450 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.127503 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.127515 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.127540 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.127554 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.179074 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.179178 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:28 crc kubenswrapper[4878]: E1204 15:37:28.179264 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:28 crc kubenswrapper[4878]: E1204 15:37:28.179381 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.230788 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.230848 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.230862 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.230906 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.230922 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.334163 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.334223 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.334240 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.334270 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.334283 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.437401 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.437453 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.437465 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.437487 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.437498 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.540553 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.540612 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.540623 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.540653 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.540665 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.643356 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.643416 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.643434 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.643456 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.643468 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.746958 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.747033 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.747043 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.747074 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.747084 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.850133 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.850199 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.850212 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.850231 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.850588 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.954060 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.954103 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.954115 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.954133 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:28 crc kubenswrapper[4878]: I1204 15:37:28.954143 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:28Z","lastTransitionTime":"2025-12-04T15:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.056907 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.056945 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.056956 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.057009 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.057019 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.160087 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.160361 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.160439 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.160539 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.160642 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.179558 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:29 crc kubenswrapper[4878]: E1204 15:37:29.179722 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.180072 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:29 crc kubenswrapper[4878]: E1204 15:37:29.180311 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.263261 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.263330 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.263348 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.263378 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.263396 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.366140 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.366185 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.366198 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.366217 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.366231 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.468962 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.469024 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.469037 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.469054 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.469066 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.571959 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.572011 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.572023 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.572041 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.572052 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.674538 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.674590 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.674600 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.674616 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.674628 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.777237 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.777303 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.777324 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.777345 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.777356 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.880252 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.880332 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.880345 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.880365 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.880377 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.923147 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.923193 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.923205 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.923224 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.923237 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: E1204 15:37:29.935466 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.939482 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.939528 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.939542 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.939561 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.939574 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: E1204 15:37:29.951968 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.955838 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.955902 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.955920 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.955938 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.955949 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: E1204 15:37:29.969009 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.974106 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.974165 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.974183 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.974214 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.974233 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:29 crc kubenswrapper[4878]: E1204 15:37:29.990101 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:29Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.995860 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.995971 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.995984 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.996006 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:29 crc kubenswrapper[4878]: I1204 15:37:29.996021 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:29Z","lastTransitionTime":"2025-12-04T15:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: E1204 15:37:30.011259 4878 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-04T15:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"96c4f62a-170b-46e9-91e9-d7457aac55d0\\\",\\\"systemUUID\\\":\\\"1031ff9d-cccb-4da2-a988-194843f64ced\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-04T15:37:30Z is after 2025-08-24T17:21:41Z" Dec 04 15:37:30 crc kubenswrapper[4878]: E1204 15:37:30.011399 4878 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.013561 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.013634 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.013646 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.013664 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.013675 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.115713 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.116056 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.116149 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.116242 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.116328 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.179304 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.179276 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:30 crc kubenswrapper[4878]: E1204 15:37:30.179641 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:30 crc kubenswrapper[4878]: E1204 15:37:30.179543 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.219669 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.219725 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.219737 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.219782 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.219795 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.323739 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.323802 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.323821 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.323844 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.323856 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.426650 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.427428 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.427453 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.427475 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.427486 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.530092 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.530137 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.530153 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.530171 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.530183 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.634083 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.634158 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.634170 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.634190 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.634222 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.736399 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.736699 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.736812 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.736944 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.737050 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.839549 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.840033 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.840376 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.840509 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.840625 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.944425 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.944751 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.944895 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.945017 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:30 crc kubenswrapper[4878]: I1204 15:37:30.945128 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:30Z","lastTransitionTime":"2025-12-04T15:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.047369 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.047422 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.047434 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.047451 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.047463 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.161356 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.161389 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.161402 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.161420 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.161431 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.179200 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.179335 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:31 crc kubenswrapper[4878]: E1204 15:37:31.179453 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:31 crc kubenswrapper[4878]: E1204 15:37:31.179723 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.263836 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.263931 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.263945 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.263968 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.263985 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.366589 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.366637 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.366648 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.366664 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.366673 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.469369 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.469410 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.469421 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.469443 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.469454 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.572369 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.572418 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.572429 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.572447 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.572460 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.675900 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.675949 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.675958 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.675982 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.676001 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.778742 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.778812 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.778836 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.778863 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.778916 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.881405 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.881461 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.881476 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.881493 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.881507 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.985946 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.986076 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.986089 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.986130 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:31 crc kubenswrapper[4878]: I1204 15:37:31.986144 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:31Z","lastTransitionTime":"2025-12-04T15:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.088812 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.088917 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.088928 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.088963 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.088974 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:32Z","lastTransitionTime":"2025-12-04T15:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.179563 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.179671 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:32 crc kubenswrapper[4878]: E1204 15:37:32.179846 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:32 crc kubenswrapper[4878]: E1204 15:37:32.180030 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.191883 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.191958 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.191967 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.191983 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.191996 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:32Z","lastTransitionTime":"2025-12-04T15:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.294275 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.294342 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.294358 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.294377 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.294390 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:32Z","lastTransitionTime":"2025-12-04T15:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.397732 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.397776 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.397792 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.397810 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.397823 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:32Z","lastTransitionTime":"2025-12-04T15:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.501314 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.501371 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.501383 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.501432 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.501486 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:32Z","lastTransitionTime":"2025-12-04T15:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.604825 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.604946 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.604961 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.604982 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.605000 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:32Z","lastTransitionTime":"2025-12-04T15:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.707451 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.707520 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.707534 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.707558 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.707574 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:32Z","lastTransitionTime":"2025-12-04T15:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.814574 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.814649 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.814662 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.814686 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.814706 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:32Z","lastTransitionTime":"2025-12-04T15:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.917959 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.918092 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.918111 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.918136 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:32 crc kubenswrapper[4878]: I1204 15:37:32.918165 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:32Z","lastTransitionTime":"2025-12-04T15:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.023143 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.023199 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.023212 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.023229 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.023241 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.127003 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.127064 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.127077 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.127097 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.127116 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.179592 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.179592 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:33 crc kubenswrapper[4878]: E1204 15:37:33.180020 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:33 crc kubenswrapper[4878]: E1204 15:37:33.180151 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.230720 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.230777 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.230788 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.230806 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.230821 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.333746 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.333792 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.333803 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.333822 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.333835 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.437149 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.437199 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.437209 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.437233 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.437244 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.539849 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.539979 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.539990 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.540006 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.540018 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.642154 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.642223 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.642234 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.642270 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.642285 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.745221 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.745278 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.745291 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.745313 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.745328 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.848584 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.848672 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.848683 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.848699 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.848713 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.951551 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.951590 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.951600 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.951615 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:33 crc kubenswrapper[4878]: I1204 15:37:33.951624 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:33Z","lastTransitionTime":"2025-12-04T15:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.054789 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.054830 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.054838 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.054853 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.054864 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.157741 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.157781 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.157790 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.157807 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.157817 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.178567 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:34 crc kubenswrapper[4878]: E1204 15:37:34.178721 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.178739 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:34 crc kubenswrapper[4878]: E1204 15:37:34.179002 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.179335 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:37:34 crc kubenswrapper[4878]: E1204 15:37:34.179509 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.261036 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.261082 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.261094 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.261113 4878 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.261126 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.364488 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.364527 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.364544 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.364559 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.364569 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.467469 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.467526 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.467539 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.467561 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.467577 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.571195 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.571260 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.571275 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.571297 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.571313 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.674354 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.674450 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.674469 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.674496 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.674517 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.777082 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.777155 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.777174 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.777205 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.777223 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.879665 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.880003 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.880015 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.880033 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.880047 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.983083 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.983121 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.983130 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.983145 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:34 crc kubenswrapper[4878]: I1204 15:37:34.983156 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:34Z","lastTransitionTime":"2025-12-04T15:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.086777 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.086835 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.086850 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.086900 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.086919 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:35Z","lastTransitionTime":"2025-12-04T15:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.180123 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.180180 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:35 crc kubenswrapper[4878]: E1204 15:37:35.180316 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:35 crc kubenswrapper[4878]: E1204 15:37:35.180569 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.189203 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.189260 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.189273 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.189292 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.189306 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:35Z","lastTransitionTime":"2025-12-04T15:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.292834 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.292909 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.292923 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.292944 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.292956 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:35Z","lastTransitionTime":"2025-12-04T15:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.396624 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.396685 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.396701 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.396725 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.396745 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:35Z","lastTransitionTime":"2025-12-04T15:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.499620 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.499670 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.499682 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.499700 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.499710 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:35Z","lastTransitionTime":"2025-12-04T15:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.601867 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.601957 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.601973 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.601998 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.602018 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:35Z","lastTransitionTime":"2025-12-04T15:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.705248 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.705294 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.705303 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.705320 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.705331 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:35Z","lastTransitionTime":"2025-12-04T15:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.807624 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.807704 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.807726 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.807759 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.807782 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:35Z","lastTransitionTime":"2025-12-04T15:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.911072 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.911154 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.911177 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.911203 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:35 crc kubenswrapper[4878]: I1204 15:37:35.911219 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:35Z","lastTransitionTime":"2025-12-04T15:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.013531 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.013579 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.013590 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.013608 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.013621 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.116636 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.116685 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.116699 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.116722 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.116734 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.179097 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.179120 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:36 crc kubenswrapper[4878]: E1204 15:37:36.179257 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:36 crc kubenswrapper[4878]: E1204 15:37:36.179502 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.219659 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.219717 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.219728 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.219750 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.219763 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.323004 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.323051 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.323064 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.323081 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.323094 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.426513 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.426568 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.426579 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.426595 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.426607 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.530160 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.530221 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.530232 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.530252 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.530265 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.634059 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.634131 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.634148 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.634175 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.634199 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.737236 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.737279 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.737287 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.737301 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.737311 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.840257 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.840324 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.840351 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.840379 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.840399 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.943739 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.943786 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.943799 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.943818 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:36 crc kubenswrapper[4878]: I1204 15:37:36.943831 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:36Z","lastTransitionTime":"2025-12-04T15:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.046939 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.047026 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.047041 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.047059 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.047072 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.150730 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.150802 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.150820 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.150847 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.150864 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.179093 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.179166 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:37 crc kubenswrapper[4878]: E1204 15:37:37.179583 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:37 crc kubenswrapper[4878]: E1204 15:37:37.180079 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.244523 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=51.244476699 podStartE2EDuration="51.244476699s" podCreationTimestamp="2025-12-04 15:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:37.243947096 +0000 UTC m=+101.206484062" watchObservedRunningTime="2025-12-04 15:37:37.244476699 +0000 UTC m=+101.207013665" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.244958 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-prhdp" podStartSLOduration=77.244952162 podStartE2EDuration="1m17.244952162s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:37.224212342 +0000 UTC m=+101.186749308" watchObservedRunningTime="2025-12-04 15:37:37.244952162 +0000 UTC m=+101.207489118" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.252721 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 
04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.252795 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.252832 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.252855 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.252898 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.263220 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.263206108 podStartE2EDuration="16.263206108s" podCreationTimestamp="2025-12-04 15:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:37.262933051 +0000 UTC m=+101.225470007" watchObservedRunningTime="2025-12-04 15:37:37.263206108 +0000 UTC m=+101.225743064" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.309467 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.309439869 podStartE2EDuration="1m22.309439869s" podCreationTimestamp="2025-12-04 15:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-12-04 15:37:37.285704652 +0000 UTC m=+101.248241618" watchObservedRunningTime="2025-12-04 15:37:37.309439869 +0000 UTC m=+101.271976815" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.329312 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9p8p7" podStartSLOduration=78.329291326 podStartE2EDuration="1m18.329291326s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:37.314059157 +0000 UTC m=+101.276596113" watchObservedRunningTime="2025-12-04 15:37:37.329291326 +0000 UTC m=+101.291828282" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.356035 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.356092 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.356105 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.356124 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.356136 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.425975 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podStartSLOduration=78.425951565 podStartE2EDuration="1m18.425951565s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:37.407715319 +0000 UTC m=+101.370252275" watchObservedRunningTime="2025-12-04 15:37:37.425951565 +0000 UTC m=+101.388488521" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.426274 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6rrvz" podStartSLOduration=78.426270783 podStartE2EDuration="1m18.426270783s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:37.425858332 +0000 UTC m=+101.388395288" watchObservedRunningTime="2025-12-04 15:37:37.426270783 +0000 UTC m=+101.388807739" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.458656 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.458689 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.458708 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.458725 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.458737 4878 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.468354 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=18.468332277000002 podStartE2EDuration="18.468332277s" podCreationTimestamp="2025-12-04 15:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:37.468132002 +0000 UTC m=+101.430668968" watchObservedRunningTime="2025-12-04 15:37:37.468332277 +0000 UTC m=+101.430869233" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.536523 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xrkl9" podStartSLOduration=78.536504069 podStartE2EDuration="1m18.536504069s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:37.507472957 +0000 UTC m=+101.470009923" watchObservedRunningTime="2025-12-04 15:37:37.536504069 +0000 UTC m=+101.499041025" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.551398 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.551374379 podStartE2EDuration="1m22.551374379s" podCreationTimestamp="2025-12-04 15:36:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-04 15:37:37.55063572 +0000 UTC m=+101.513172676" watchObservedRunningTime="2025-12-04 15:37:37.551374379 +0000 UTC m=+101.513911335" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.561443 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.561546 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.561556 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.561578 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.561589 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.664151 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.664192 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.664203 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.664221 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.664233 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.767902 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.767948 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.767962 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.767984 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.767998 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.872129 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.872178 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.872189 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.872210 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.872223 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.974753 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.975106 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.975169 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.975244 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:37 crc kubenswrapper[4878]: I1204 15:37:37.975307 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:37Z","lastTransitionTime":"2025-12-04T15:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.065404 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:38 crc kubenswrapper[4878]: E1204 15:37:38.065627 4878 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:37:38 crc kubenswrapper[4878]: E1204 15:37:38.065751 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs podName:ab155c5e-9187-4276-98c7-20c0d7e35f4b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.065722357 +0000 UTC m=+166.028259383 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs") pod "network-metrics-daemon-k9k9q" (UID: "ab155c5e-9187-4276-98c7-20c0d7e35f4b") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.078146 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.078252 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.078272 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.078290 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.078300 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:38Z","lastTransitionTime":"2025-12-04T15:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.179487 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.179522 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:38 crc kubenswrapper[4878]: E1204 15:37:38.179658 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:38 crc kubenswrapper[4878]: E1204 15:37:38.179729 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.181192 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.181247 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.181260 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.181276 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.181289 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:38Z","lastTransitionTime":"2025-12-04T15:37:38Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.283570 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.283620 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.283636 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.283655 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.283668 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:38Z","lastTransitionTime":"2025-12-04T15:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.386763 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.386814 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.386826 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.386845 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.386859 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:38Z","lastTransitionTime":"2025-12-04T15:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.489583 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.489623 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.489632 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.489652 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.489665 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:38Z","lastTransitionTime":"2025-12-04T15:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.592744 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.592794 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.592807 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.592824 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.592836 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:38Z","lastTransitionTime":"2025-12-04T15:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.695070 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.695118 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.695131 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.695155 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.695168 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:38Z","lastTransitionTime":"2025-12-04T15:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.797642 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.797682 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.797693 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.797710 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.797721 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:38Z","lastTransitionTime":"2025-12-04T15:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.900675 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.900719 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.900729 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.900748 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:38 crc kubenswrapper[4878]: I1204 15:37:38.900759 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:38Z","lastTransitionTime":"2025-12-04T15:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.038705 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.038748 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.038758 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.038779 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.038793 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.141747 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.142070 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.142159 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.142238 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.142316 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.179514 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:39 crc kubenswrapper[4878]: E1204 15:37:39.180051 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.179570 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:39 crc kubenswrapper[4878]: E1204 15:37:39.180332 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.245279 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.245555 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.245646 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.245738 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.245824 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.348739 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.348800 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.348815 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.348835 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.348847 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.452291 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.452620 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.452711 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.452828 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.452948 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.555303 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.555626 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.555718 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.555806 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.555952 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.658997 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.659048 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.659066 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.659087 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.659099 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.762244 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.762296 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.762308 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.762327 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.762340 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.865254 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.865554 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.865625 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.865697 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.865757 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.968817 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.968901 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.968914 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.968933 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:39 crc kubenswrapper[4878]: I1204 15:37:39.968950 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:39Z","lastTransitionTime":"2025-12-04T15:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.072022 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.072085 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.072096 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.072118 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.072133 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:40Z","lastTransitionTime":"2025-12-04T15:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.090738 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.090789 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.090802 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.090827 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.090841 4878 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-04T15:37:40Z","lastTransitionTime":"2025-12-04T15:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.146163 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5bgh4" podStartSLOduration=81.146135348 podStartE2EDuration="1m21.146135348s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:37.578549553 +0000 UTC m=+101.541086509" watchObservedRunningTime="2025-12-04 15:37:40.146135348 +0000 UTC m=+104.108672304" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.146418 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx"] Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.146923 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.149614 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.149686 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.150141 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.150655 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.180004 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:40 crc kubenswrapper[4878]: E1204 15:37:40.180193 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.180195 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:40 crc kubenswrapper[4878]: E1204 15:37:40.180292 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.190688 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a7c0f1d-15c3-4490-b9db-472790536260-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.190830 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a7c0f1d-15c3-4490-b9db-472790536260-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.190964 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a7c0f1d-15c3-4490-b9db-472790536260-service-ca\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.191070 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a7c0f1d-15c3-4490-b9db-472790536260-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.191187 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7c0f1d-15c3-4490-b9db-472790536260-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.292631 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a7c0f1d-15c3-4490-b9db-472790536260-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.292748 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a7c0f1d-15c3-4490-b9db-472790536260-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.292778 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a7c0f1d-15c3-4490-b9db-472790536260-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.292806 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a7c0f1d-15c3-4490-b9db-472790536260-service-ca\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.292835 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a7c0f1d-15c3-4490-b9db-472790536260-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.293204 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3a7c0f1d-15c3-4490-b9db-472790536260-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.293293 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3a7c0f1d-15c3-4490-b9db-472790536260-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.294068 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3a7c0f1d-15c3-4490-b9db-472790536260-service-ca\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.302609 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3a7c0f1d-15c3-4490-b9db-472790536260-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.310772 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3a7c0f1d-15c3-4490-b9db-472790536260-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-t7zhx\" (UID: \"3a7c0f1d-15c3-4490-b9db-472790536260\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.464477 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.726034 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" event={"ID":"3a7c0f1d-15c3-4490-b9db-472790536260","Type":"ContainerStarted","Data":"f297a5481f798c517bdf8288e040773f5326ae78e4c0a00f5153259894a160d7"} Dec 04 15:37:40 crc kubenswrapper[4878]: I1204 15:37:40.726100 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" event={"ID":"3a7c0f1d-15c3-4490-b9db-472790536260","Type":"ContainerStarted","Data":"5e510a85dc2db83e798c8d5212d14d40f12adc6b68f543e75458ae80c763c3ed"} Dec 04 15:37:41 crc kubenswrapper[4878]: I1204 15:37:41.179661 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:41 crc kubenswrapper[4878]: I1204 15:37:41.179787 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:41 crc kubenswrapper[4878]: E1204 15:37:41.179863 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:41 crc kubenswrapper[4878]: E1204 15:37:41.179953 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:42 crc kubenswrapper[4878]: I1204 15:37:42.179311 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:42 crc kubenswrapper[4878]: I1204 15:37:42.179345 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:42 crc kubenswrapper[4878]: E1204 15:37:42.179536 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:42 crc kubenswrapper[4878]: E1204 15:37:42.179600 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:43 crc kubenswrapper[4878]: I1204 15:37:43.179603 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:43 crc kubenswrapper[4878]: I1204 15:37:43.179629 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:43 crc kubenswrapper[4878]: E1204 15:37:43.179814 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:43 crc kubenswrapper[4878]: E1204 15:37:43.179895 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:44 crc kubenswrapper[4878]: I1204 15:37:44.179089 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:44 crc kubenswrapper[4878]: I1204 15:37:44.179139 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:44 crc kubenswrapper[4878]: E1204 15:37:44.179517 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:44 crc kubenswrapper[4878]: E1204 15:37:44.179711 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:45 crc kubenswrapper[4878]: I1204 15:37:45.179078 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:45 crc kubenswrapper[4878]: E1204 15:37:45.179467 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:45 crc kubenswrapper[4878]: I1204 15:37:45.179132 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:45 crc kubenswrapper[4878]: E1204 15:37:45.180209 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:46 crc kubenswrapper[4878]: I1204 15:37:46.179565 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:46 crc kubenswrapper[4878]: I1204 15:37:46.179681 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:46 crc kubenswrapper[4878]: E1204 15:37:46.180446 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:46 crc kubenswrapper[4878]: E1204 15:37:46.180263 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:47 crc kubenswrapper[4878]: I1204 15:37:47.178771 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:47 crc kubenswrapper[4878]: I1204 15:37:47.179187 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:47 crc kubenswrapper[4878]: E1204 15:37:47.179979 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:47 crc kubenswrapper[4878]: E1204 15:37:47.180105 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:48 crc kubenswrapper[4878]: I1204 15:37:48.178967 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:48 crc kubenswrapper[4878]: E1204 15:37:48.179167 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:48 crc kubenswrapper[4878]: I1204 15:37:48.180090 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:48 crc kubenswrapper[4878]: E1204 15:37:48.180262 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:49 crc kubenswrapper[4878]: I1204 15:37:49.179203 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:49 crc kubenswrapper[4878]: I1204 15:37:49.179387 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:49 crc kubenswrapper[4878]: E1204 15:37:49.179905 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:49 crc kubenswrapper[4878]: E1204 15:37:49.180038 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:49 crc kubenswrapper[4878]: I1204 15:37:49.180180 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:37:49 crc kubenswrapper[4878]: E1204 15:37:49.180384 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" Dec 04 15:37:50 crc kubenswrapper[4878]: I1204 15:37:50.178798 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:50 crc kubenswrapper[4878]: E1204 15:37:50.178966 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:50 crc kubenswrapper[4878]: I1204 15:37:50.179191 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:50 crc kubenswrapper[4878]: E1204 15:37:50.179365 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:51 crc kubenswrapper[4878]: I1204 15:37:51.179699 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:51 crc kubenswrapper[4878]: I1204 15:37:51.179761 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:51 crc kubenswrapper[4878]: E1204 15:37:51.180159 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:51 crc kubenswrapper[4878]: E1204 15:37:51.180213 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:52 crc kubenswrapper[4878]: I1204 15:37:52.178990 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:52 crc kubenswrapper[4878]: I1204 15:37:52.179130 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:52 crc kubenswrapper[4878]: E1204 15:37:52.179167 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:52 crc kubenswrapper[4878]: E1204 15:37:52.179404 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:53 crc kubenswrapper[4878]: I1204 15:37:53.178777 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:53 crc kubenswrapper[4878]: I1204 15:37:53.178932 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:53 crc kubenswrapper[4878]: E1204 15:37:53.178969 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:53 crc kubenswrapper[4878]: E1204 15:37:53.179265 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:54 crc kubenswrapper[4878]: I1204 15:37:54.179241 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:54 crc kubenswrapper[4878]: I1204 15:37:54.179297 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:54 crc kubenswrapper[4878]: E1204 15:37:54.179440 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:54 crc kubenswrapper[4878]: E1204 15:37:54.179631 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:54 crc kubenswrapper[4878]: I1204 15:37:54.772490 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/1.log" Dec 04 15:37:54 crc kubenswrapper[4878]: I1204 15:37:54.772982 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/0.log" Dec 04 15:37:54 crc kubenswrapper[4878]: I1204 15:37:54.773045 4878 generic.go:334] "Generic (PLEG): container finished" podID="c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757" containerID="9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0" exitCode=1 Dec 04 15:37:54 crc kubenswrapper[4878]: I1204 15:37:54.773128 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p8p7" 
event={"ID":"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757","Type":"ContainerDied","Data":"9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0"} Dec 04 15:37:54 crc kubenswrapper[4878]: I1204 15:37:54.773293 4878 scope.go:117] "RemoveContainer" containerID="b63a854a30c54b867c4bf74a358ee00099309eb0d3e4fe752b5eb56fddab4ea1" Dec 04 15:37:54 crc kubenswrapper[4878]: I1204 15:37:54.774433 4878 scope.go:117] "RemoveContainer" containerID="9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0" Dec 04 15:37:54 crc kubenswrapper[4878]: E1204 15:37:54.775213 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9p8p7_openshift-multus(c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757)\"" pod="openshift-multus/multus-9p8p7" podUID="c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757" Dec 04 15:37:54 crc kubenswrapper[4878]: I1204 15:37:54.794188 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-t7zhx" podStartSLOduration=95.79416527 podStartE2EDuration="1m35.79416527s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:37:40.743063745 +0000 UTC m=+104.705600701" watchObservedRunningTime="2025-12-04 15:37:54.79416527 +0000 UTC m=+118.756702226" Dec 04 15:37:55 crc kubenswrapper[4878]: I1204 15:37:55.179382 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:55 crc kubenswrapper[4878]: I1204 15:37:55.179384 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:55 crc kubenswrapper[4878]: E1204 15:37:55.179595 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:55 crc kubenswrapper[4878]: E1204 15:37:55.179813 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:55 crc kubenswrapper[4878]: I1204 15:37:55.778721 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/1.log" Dec 04 15:37:56 crc kubenswrapper[4878]: I1204 15:37:56.179518 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:56 crc kubenswrapper[4878]: I1204 15:37:56.179567 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:56 crc kubenswrapper[4878]: E1204 15:37:56.179693 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:56 crc kubenswrapper[4878]: E1204 15:37:56.179834 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:57 crc kubenswrapper[4878]: E1204 15:37:57.170797 4878 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 04 15:37:57 crc kubenswrapper[4878]: I1204 15:37:57.178671 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:57 crc kubenswrapper[4878]: I1204 15:37:57.178762 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:57 crc kubenswrapper[4878]: E1204 15:37:57.180048 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:37:57 crc kubenswrapper[4878]: E1204 15:37:57.180238 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:57 crc kubenswrapper[4878]: E1204 15:37:57.272157 4878 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 15:37:58 crc kubenswrapper[4878]: I1204 15:37:58.179665 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:37:58 crc kubenswrapper[4878]: E1204 15:37:58.179926 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:37:58 crc kubenswrapper[4878]: I1204 15:37:58.180157 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:37:58 crc kubenswrapper[4878]: E1204 15:37:58.180980 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:37:59 crc kubenswrapper[4878]: I1204 15:37:59.179805 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:37:59 crc kubenswrapper[4878]: I1204 15:37:59.179850 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:37:59 crc kubenswrapper[4878]: E1204 15:37:59.180029 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:37:59 crc kubenswrapper[4878]: E1204 15:37:59.180156 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:00 crc kubenswrapper[4878]: I1204 15:38:00.178918 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:00 crc kubenswrapper[4878]: I1204 15:38:00.179007 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:00 crc kubenswrapper[4878]: E1204 15:38:00.179095 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:00 crc kubenswrapper[4878]: E1204 15:38:00.179215 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:01 crc kubenswrapper[4878]: I1204 15:38:01.179180 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:01 crc kubenswrapper[4878]: I1204 15:38:01.179196 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:01 crc kubenswrapper[4878]: E1204 15:38:01.179401 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:01 crc kubenswrapper[4878]: E1204 15:38:01.179994 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:01 crc kubenswrapper[4878]: I1204 15:38:01.180421 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:38:01 crc kubenswrapper[4878]: E1204 15:38:01.181401 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qzptn_openshift-ovn-kubernetes(5b6e8498-be44-4b9c-9dd3-dc08f9515f2e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" Dec 04 15:38:02 crc kubenswrapper[4878]: I1204 15:38:02.178997 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:02 crc kubenswrapper[4878]: I1204 15:38:02.179010 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:02 crc kubenswrapper[4878]: E1204 15:38:02.179283 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:02 crc kubenswrapper[4878]: E1204 15:38:02.179462 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:02 crc kubenswrapper[4878]: E1204 15:38:02.273475 4878 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 15:38:03 crc kubenswrapper[4878]: I1204 15:38:03.178657 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:03 crc kubenswrapper[4878]: I1204 15:38:03.178717 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:03 crc kubenswrapper[4878]: E1204 15:38:03.178815 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:03 crc kubenswrapper[4878]: E1204 15:38:03.179006 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:04 crc kubenswrapper[4878]: I1204 15:38:04.178655 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:04 crc kubenswrapper[4878]: I1204 15:38:04.178722 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:04 crc kubenswrapper[4878]: E1204 15:38:04.179734 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:04 crc kubenswrapper[4878]: E1204 15:38:04.179891 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:05 crc kubenswrapper[4878]: I1204 15:38:05.179487 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:05 crc kubenswrapper[4878]: E1204 15:38:05.181464 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:05 crc kubenswrapper[4878]: I1204 15:38:05.179545 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:05 crc kubenswrapper[4878]: E1204 15:38:05.182393 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:06 crc kubenswrapper[4878]: I1204 15:38:06.179342 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:06 crc kubenswrapper[4878]: I1204 15:38:06.179391 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:06 crc kubenswrapper[4878]: E1204 15:38:06.179764 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:06 crc kubenswrapper[4878]: E1204 15:38:06.180427 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:07 crc kubenswrapper[4878]: I1204 15:38:07.178972 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:07 crc kubenswrapper[4878]: E1204 15:38:07.180645 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:07 crc kubenswrapper[4878]: I1204 15:38:07.180734 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:07 crc kubenswrapper[4878]: E1204 15:38:07.180961 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:07 crc kubenswrapper[4878]: E1204 15:38:07.274310 4878 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 15:38:08 crc kubenswrapper[4878]: I1204 15:38:08.178989 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:08 crc kubenswrapper[4878]: I1204 15:38:08.179097 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:08 crc kubenswrapper[4878]: E1204 15:38:08.180134 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:08 crc kubenswrapper[4878]: E1204 15:38:08.180244 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:09 crc kubenswrapper[4878]: I1204 15:38:09.178934 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:09 crc kubenswrapper[4878]: I1204 15:38:09.179019 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:09 crc kubenswrapper[4878]: E1204 15:38:09.179775 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:09 crc kubenswrapper[4878]: I1204 15:38:09.179498 4878 scope.go:117] "RemoveContainer" containerID="9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0" Dec 04 15:38:09 crc kubenswrapper[4878]: E1204 15:38:09.179962 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:10 crc kubenswrapper[4878]: I1204 15:38:10.179203 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:10 crc kubenswrapper[4878]: E1204 15:38:10.179352 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:10 crc kubenswrapper[4878]: I1204 15:38:10.179621 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:10 crc kubenswrapper[4878]: E1204 15:38:10.179685 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:10 crc kubenswrapper[4878]: I1204 15:38:10.828140 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/1.log" Dec 04 15:38:10 crc kubenswrapper[4878]: I1204 15:38:10.828202 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p8p7" event={"ID":"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757","Type":"ContainerStarted","Data":"b7abb0abe7f56ff1bdcd8c17582bd214dee727f1f4d519f3197514b6c583a0ad"} Dec 04 15:38:11 crc kubenswrapper[4878]: I1204 15:38:11.179550 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:11 crc kubenswrapper[4878]: I1204 15:38:11.179594 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:11 crc kubenswrapper[4878]: E1204 15:38:11.179754 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:11 crc kubenswrapper[4878]: E1204 15:38:11.179857 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:12 crc kubenswrapper[4878]: I1204 15:38:12.179046 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:12 crc kubenswrapper[4878]: I1204 15:38:12.179046 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:12 crc kubenswrapper[4878]: E1204 15:38:12.179227 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:12 crc kubenswrapper[4878]: E1204 15:38:12.179284 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:12 crc kubenswrapper[4878]: E1204 15:38:12.275512 4878 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 15:38:13 crc kubenswrapper[4878]: I1204 15:38:13.179153 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:13 crc kubenswrapper[4878]: I1204 15:38:13.179151 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:13 crc kubenswrapper[4878]: E1204 15:38:13.179296 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:13 crc kubenswrapper[4878]: E1204 15:38:13.179465 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:14 crc kubenswrapper[4878]: I1204 15:38:14.179016 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:14 crc kubenswrapper[4878]: I1204 15:38:14.179173 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:14 crc kubenswrapper[4878]: E1204 15:38:14.179201 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:14 crc kubenswrapper[4878]: E1204 15:38:14.179705 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:14 crc kubenswrapper[4878]: I1204 15:38:14.180445 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:38:14 crc kubenswrapper[4878]: I1204 15:38:14.850221 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/3.log" Dec 04 15:38:14 crc kubenswrapper[4878]: I1204 15:38:14.855318 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerStarted","Data":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} Dec 04 15:38:14 crc kubenswrapper[4878]: I1204 15:38:14.856064 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:38:14 crc kubenswrapper[4878]: I1204 15:38:14.899222 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podStartSLOduration=115.899194534 podStartE2EDuration="1m55.899194534s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:14.898610169 +0000 UTC m=+138.861147145" watchObservedRunningTime="2025-12-04 15:38:14.899194534 +0000 UTC m=+138.861731510" Dec 04 15:38:15 crc kubenswrapper[4878]: I1204 15:38:15.178889 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:15 crc kubenswrapper[4878]: I1204 15:38:15.178915 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:15 crc kubenswrapper[4878]: E1204 15:38:15.179045 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:15 crc kubenswrapper[4878]: E1204 15:38:15.179067 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:15 crc kubenswrapper[4878]: I1204 15:38:15.422345 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-k9k9q"] Dec 04 15:38:15 crc kubenswrapper[4878]: I1204 15:38:15.422525 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:15 crc kubenswrapper[4878]: E1204 15:38:15.422642 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:16 crc kubenswrapper[4878]: I1204 15:38:16.179337 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:16 crc kubenswrapper[4878]: E1204 15:38:16.179807 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:17 crc kubenswrapper[4878]: I1204 15:38:17.179448 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:17 crc kubenswrapper[4878]: I1204 15:38:17.179446 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:17 crc kubenswrapper[4878]: E1204 15:38:17.180503 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:17 crc kubenswrapper[4878]: I1204 15:38:17.180612 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:17 crc kubenswrapper[4878]: E1204 15:38:17.180691 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:17 crc kubenswrapper[4878]: E1204 15:38:17.180776 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:17 crc kubenswrapper[4878]: E1204 15:38:17.275949 4878 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 04 15:38:18 crc kubenswrapper[4878]: I1204 15:38:18.179082 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:18 crc kubenswrapper[4878]: E1204 15:38:18.179278 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:19 crc kubenswrapper[4878]: I1204 15:38:19.179349 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:19 crc kubenswrapper[4878]: I1204 15:38:19.179413 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:19 crc kubenswrapper[4878]: I1204 15:38:19.179553 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:19 crc kubenswrapper[4878]: E1204 15:38:19.179720 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:19 crc kubenswrapper[4878]: E1204 15:38:19.179811 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:19 crc kubenswrapper[4878]: E1204 15:38:19.179920 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:20 crc kubenswrapper[4878]: I1204 15:38:20.179263 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:20 crc kubenswrapper[4878]: E1204 15:38:20.179426 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:21 crc kubenswrapper[4878]: I1204 15:38:21.179536 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:21 crc kubenswrapper[4878]: I1204 15:38:21.179597 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:21 crc kubenswrapper[4878]: I1204 15:38:21.179582 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:21 crc kubenswrapper[4878]: E1204 15:38:21.179765 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:38:21 crc kubenswrapper[4878]: E1204 15:38:21.179921 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 04 15:38:21 crc kubenswrapper[4878]: E1204 15:38:21.180099 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-k9k9q" podUID="ab155c5e-9187-4276-98c7-20c0d7e35f4b" Dec 04 15:38:22 crc kubenswrapper[4878]: I1204 15:38:22.178984 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:22 crc kubenswrapper[4878]: E1204 15:38:22.179148 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.175423 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:23 crc kubenswrapper[4878]: E1204 15:38:23.175643 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:40:25.175605459 +0000 UTC m=+269.138142415 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.176002 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.176046 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:23 crc kubenswrapper[4878]: E1204 15:38:23.176173 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:38:23 crc kubenswrapper[4878]: E1204 15:38:23.176243 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:38:23 crc kubenswrapper[4878]: E1204 15:38:23.176258 4878 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:40:25.176235555 +0000 UTC m=+269.138772511 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 04 15:38:23 crc kubenswrapper[4878]: E1204 15:38:23.176331 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:40:25.176285067 +0000 UTC m=+269.138822123 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.179400 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.179550 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.179411 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.185310 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.185725 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.185975 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.186140 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.186281 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.186698 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.277847 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.277962 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: 
\"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.284442 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.284483 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:23 crc kubenswrapper[4878]: I1204 15:38:23.509739 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 04 15:38:24 crc kubenswrapper[4878]: I1204 15:38:24.178640 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:24 crc kubenswrapper[4878]: I1204 15:38:24.194929 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:24 crc kubenswrapper[4878]: W1204 15:38:24.571325 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-4243f6b34fe02a83cb3f49186bcdb0e8fa5f9447ba26988e5351c8bd414008b9 WatchSource:0}: Error finding container 4243f6b34fe02a83cb3f49186bcdb0e8fa5f9447ba26988e5351c8bd414008b9: Status 404 returned error can't find the container with id 4243f6b34fe02a83cb3f49186bcdb0e8fa5f9447ba26988e5351c8bd414008b9 Dec 04 15:38:24 crc kubenswrapper[4878]: I1204 15:38:24.895451 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8e99d5b6d6af237ecd0b6051e169462d1855e5592a169d8a311370feb3119701"} Dec 04 15:38:24 crc kubenswrapper[4878]: I1204 15:38:24.897659 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4243f6b34fe02a83cb3f49186bcdb0e8fa5f9447ba26988e5351c8bd414008b9"} Dec 04 15:38:26 crc kubenswrapper[4878]: I1204 15:38:26.906277 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"90ba6fb4bb7efb2c7d634c368870b934d55dcbe88e7b4a32eba6e0c7f23077e9"} Dec 04 15:38:26 crc kubenswrapper[4878]: I1204 15:38:26.910221 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"15411592fedf802fa2cb802f33dbd413d703c5ea012fdd29d33055bd446e1298"} Dec 04 15:38:26 
crc kubenswrapper[4878]: I1204 15:38:26.910748 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:38:27 crc kubenswrapper[4878]: I1204 15:38:27.462894 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:38:30 crc kubenswrapper[4878]: I1204 15:38:30.841016 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:38:30 crc kubenswrapper[4878]: I1204 15:38:30.841098 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.112348 4878 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.149621 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-h2w2r"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.150463 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.167909 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.168071 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.175351 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.181821 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.182577 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.183295 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.183456 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.183670 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.183825 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.183996 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.184186 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.184336 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.184339 4878 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.184490 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.184675 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.184815 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.184837 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.185557 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.186049 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.187366 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.209024 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.212521 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.213716 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.213851 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.214354 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.214549 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.214681 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4x82r"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.215193 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8zkjc"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.215502 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.215781 4878 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.216193 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-v4cwq"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.216421 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tcd9t"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.216637 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g9zqn"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.216851 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-br92t"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.219733 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.220109 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.220529 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.220671 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.221225 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.221757 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.222782 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-v4cwq" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.223231 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.223478 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.223682 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.223792 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.223954 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.225795 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7mgms"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.226376 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.226664 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.227940 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.228629 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.229991 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.230065 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.230270 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.230967 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.231098 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.231198 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.231322 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.231454 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.231530 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.231862 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.232300 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-j5bq2"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.232777 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.233023 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.233351 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.234207 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.234442 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.234586 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.238374 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.238808 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-drjfj"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.239167 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.239318 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.239447 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.239554 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 04 15:38:31 
crc kubenswrapper[4878]: I1204 15:38:31.239658 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.239792 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.239938 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.240118 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-pwnk4"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.240133 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.240238 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.240330 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.240373 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.240485 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.240491 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.242021 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.242135 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.242745 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.242936 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.243827 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.277945 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.279293 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.280783 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.280992 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.281410 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.281669 4878 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.281873 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.282205 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.282373 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.282543 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.284306 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.284640 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.284390 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.285021 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.285181 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.285280 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 04 15:38:31 crc 
kubenswrapper[4878]: I1204 15:38:31.285432 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.285540 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.285203 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.285794 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.285430 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.285482 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.286185 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.286194 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.286394 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.286122 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.286696 4878 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.286829 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.286841 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.307370 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.308051 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.308343 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.308451 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.308540 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.308820 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.309360 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.309451 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.309535 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.309599 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.309670 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.309740 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311019 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/99943e7a-151b-4129-9205-f7e78e43fd3c-encryption-config\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311064 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-serving-cert\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311087 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79cp5\" (UniqueName: \"kubernetes.io/projected/1fa17e12-0683-4fba-810b-fa1c10a2738f-kube-api-access-79cp5\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311118 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c993c9c-5308-4cb6-9e94-f477625c6263-auth-proxy-config\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311148 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ktnq\" (UniqueName: \"kubernetes.io/projected/46626fe2-913c-4594-b04a-fce651a9924f-kube-api-access-6ktnq\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311182 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/99943e7a-151b-4129-9205-f7e78e43fd3c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311203 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ggckn\" (UID: \"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311223 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/99943e7a-151b-4129-9205-f7e78e43fd3c-audit-dir\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311248 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311278 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46626fe2-913c-4594-b04a-fce651a9924f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311301 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsg5j\" (UniqueName: \"kubernetes.io/projected/bc4ac80a-bd9e-41b4-9954-219008ad570d-kube-api-access-fsg5j\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311323 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-audit-dir\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311347 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6dt4\" (UniqueName: \"kubernetes.io/projected/8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f-kube-api-access-r6dt4\") pod \"openshift-controller-manager-operator-756b6f6bc6-ggckn\" (UID: \"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311384 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/620354cc-25a3-433f-9ee7-af4ed1f94827-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bqxx9\" (UID: \"620354cc-25a3-433f-9ee7-af4ed1f94827\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311407 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/46626fe2-913c-4594-b04a-fce651a9924f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311433 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4ac80a-bd9e-41b4-9954-219008ad570d-serving-cert\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311457 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5zpg\" (UniqueName: \"kubernetes.io/projected/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-kube-api-access-f5zpg\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311484 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljpl\" (UniqueName: \"kubernetes.io/projected/7689249c-9002-4b86-ba80-67bff6b584c4-kube-api-access-6ljpl\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311508 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/99943e7a-151b-4129-9205-f7e78e43fd3c-etcd-client\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311533 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99943e7a-151b-4129-9205-f7e78e43fd3c-serving-cert\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311557 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrb4w\" (UniqueName: \"kubernetes.io/projected/af8c7a67-79c2-4892-a180-ee539e48bd2b-kube-api-access-qrb4w\") pod \"downloads-7954f5f757-v4cwq\" (UID: \"af8c7a67-79c2-4892-a180-ee539e48bd2b\") " pod="openshift-console/downloads-7954f5f757-v4cwq" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311585 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311610 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-service-ca-bundle\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311634 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-t7qtl\" (UniqueName: \"kubernetes.io/projected/988eba95-b990-4f5a-ad25-e4129a8849d1-kube-api-access-t7qtl\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311663 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c993c9c-5308-4cb6-9e94-f477625c6263-machine-approver-tls\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311688 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311718 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-etcd-serving-ca\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311745 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-config\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311782 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311809 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc53410b-3bb5-45cf-aa14-ca460c71e5f0-metrics-tls\") pod \"dns-operator-744455d44c-j5bq2\" (UID: \"cc53410b-3bb5-45cf-aa14-ca460c71e5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311834 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311856 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-dir\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311904 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-trusted-ca-bundle\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311931 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311965 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-config\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.311990 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689249c-9002-4b86-ba80-67bff6b584c4-serving-cert\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312013 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kchrd\" (UniqueName: \"kubernetes.io/projected/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-kube-api-access-kchrd\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312036 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312058 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-etcd-client\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312086 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312109 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-serving-cert\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312133 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1fa17e12-0683-4fba-810b-fa1c10a2738f-config\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312158 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c178d3ca-882b-4143-bad1-b648220f66c7-serving-cert\") pod \"openshift-config-operator-7777fb866f-dlt5w\" (UID: \"c178d3ca-882b-4143-bad1-b648220f66c7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312192 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620354cc-25a3-433f-9ee7-af4ed1f94827-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bqxx9\" (UID: \"620354cc-25a3-433f-9ee7-af4ed1f94827\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312216 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99943e7a-151b-4129-9205-f7e78e43fd3c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312239 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1fa17e12-0683-4fba-810b-fa1c10a2738f-images\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 
15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312261 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ggckn\" (UID: \"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312284 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c993c9c-5308-4cb6-9e94-f477625c6263-config\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312310 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-node-pullsecrets\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312335 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-oauth-serving-cert\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312361 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bc4ac80a-bd9e-41b4-9954-219008ad570d-trusted-ca\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312384 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-config\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312409 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-config\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312497 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dl4t\" (UniqueName: \"kubernetes.io/projected/c178d3ca-882b-4143-bad1-b648220f66c7-kube-api-access-4dl4t\") pod \"openshift-config-operator-7777fb866f-dlt5w\" (UID: \"c178d3ca-882b-4143-bad1-b648220f66c7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312527 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 
15:38:31.312550 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312576 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cda9500b-96aa-457f-b588-cb2efd9f36e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kdhmk\" (UID: \"cda9500b-96aa-457f-b588-cb2efd9f36e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312597 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-client-ca\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312623 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c178d3ca-882b-4143-bad1-b648220f66c7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dlt5w\" (UID: \"c178d3ca-882b-4143-bad1-b648220f66c7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312647 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-encryption-config\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312675 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-client-ca\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312706 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312753 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc9vw\" (UniqueName: \"kubernetes.io/projected/cda9500b-96aa-457f-b588-cb2efd9f36e9-kube-api-access-wc9vw\") pod \"cluster-samples-operator-665b6dd947-kdhmk\" (UID: \"cda9500b-96aa-457f-b588-cb2efd9f36e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312785 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lm84\" (UniqueName: \"kubernetes.io/projected/8c993c9c-5308-4cb6-9e94-f477625c6263-kube-api-access-5lm84\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312809 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46626fe2-913c-4594-b04a-fce651a9924f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312832 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/99943e7a-151b-4129-9205-f7e78e43fd3c-audit-policies\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312859 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw489\" (UniqueName: \"kubernetes.io/projected/99943e7a-151b-4129-9205-f7e78e43fd3c-kube-api-access-sw489\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312913 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-image-import-ca\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312944 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.312975 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffg8t\" (UniqueName: \"kubernetes.io/projected/cc53410b-3bb5-45cf-aa14-ca460c71e5f0-kube-api-access-ffg8t\") pod \"dns-operator-744455d44c-j5bq2\" (UID: \"cc53410b-3bb5-45cf-aa14-ca460c71e5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313003 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313027 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-audit\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313051 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-serving-cert\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc 
kubenswrapper[4878]: I1204 15:38:31.313081 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrwxf\" (UniqueName: \"kubernetes.io/projected/1c451d04-9071-4d89-a6aa-a26e07523cf6-kube-api-access-qrwxf\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313108 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24scl\" (UniqueName: \"kubernetes.io/projected/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-kube-api-access-24scl\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313135 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fa17e12-0683-4fba-810b-fa1c10a2738f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313167 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313190 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1c451d04-9071-4d89-a6aa-a26e07523cf6-serving-cert\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313219 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-oauth-config\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313248 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bs4h\" (UniqueName: \"kubernetes.io/projected/620354cc-25a3-433f-9ee7-af4ed1f94827-kube-api-access-6bs4h\") pod \"openshift-apiserver-operator-796bbdcf4f-bqxx9\" (UID: \"620354cc-25a3-433f-9ee7-af4ed1f94827\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313274 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-policies\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313296 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-console-config\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc 
kubenswrapper[4878]: I1204 15:38:31.313559 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313740 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.313996 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.314152 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.314269 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.315097 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.316396 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-service-ca\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.316433 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4ac80a-bd9e-41b4-9954-219008ad570d-config\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.317289 4878 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.318115 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.320599 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.320918 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.321857 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgtbf"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.322258 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.322851 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.324460 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.325135 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.325363 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.325385 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.325456 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.326471 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.326579 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.327138 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.328191 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.331954 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.332685 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkp9z"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.333204 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.333269 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.334024 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.334687 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.335062 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.335159 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.335405 4878 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-service-ca/service-ca-9c57cc56f-lgmhx"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.335564 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.335797 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.336310 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.339031 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.347942 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-75pd8"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.348223 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.349694 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.349832 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.349834 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.350170 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.363118 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.363409 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.374729 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.375647 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.377130 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.389595 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tcd9t"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.393445 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.398468 4878 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.398669 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.411635 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.414686 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-h2w2r"]
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.415208 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417109 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620354cc-25a3-433f-9ee7-af4ed1f94827-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bqxx9\" (UID: \"620354cc-25a3-433f-9ee7-af4ed1f94827\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417158 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99943e7a-151b-4129-9205-f7e78e43fd3c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417185 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1fa17e12-0683-4fba-810b-fa1c10a2738f-images\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417232 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c993c9c-5308-4cb6-9e94-f477625c6263-config\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417257 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-node-pullsecrets\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417278 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-oauth-serving-cert\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417303 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ggckn\" (UID: \"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417341 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc4ac80a-bd9e-41b4-9954-219008ad570d-trusted-ca\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417367 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-config\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417393 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-config\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417425 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/199d51ae-0d72-4e64-a8eb-546c07076c21-service-ca-bundle\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417453 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cda9500b-96aa-457f-b588-cb2efd9f36e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kdhmk\" (UID: \"cda9500b-96aa-457f-b588-cb2efd9f36e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417477 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dl4t\" (UniqueName: \"kubernetes.io/projected/c178d3ca-882b-4143-bad1-b648220f66c7-kube-api-access-4dl4t\") pod \"openshift-config-operator-7777fb866f-dlt5w\" (UID: \"c178d3ca-882b-4143-bad1-b648220f66c7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417502 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417525 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417552 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-client-ca\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417577 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417596 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc9vw\" (UniqueName: \"kubernetes.io/projected/cda9500b-96aa-457f-b588-cb2efd9f36e9-kube-api-access-wc9vw\") pod \"cluster-samples-operator-665b6dd947-kdhmk\" (UID: \"cda9500b-96aa-457f-b588-cb2efd9f36e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417611 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c178d3ca-882b-4143-bad1-b648220f66c7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dlt5w\" (UID: \"c178d3ca-882b-4143-bad1-b648220f66c7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417631 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-encryption-config\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417647 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-client-ca\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417664 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lm84\" (UniqueName: \"kubernetes.io/projected/8c993c9c-5308-4cb6-9e94-f477625c6263-kube-api-access-5lm84\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417683 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46626fe2-913c-4594-b04a-fce651a9924f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417701 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/99943e7a-151b-4129-9205-f7e78e43fd3c-audit-policies\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417716 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw489\" (UniqueName: \"kubernetes.io/projected/99943e7a-151b-4129-9205-f7e78e43fd3c-kube-api-access-sw489\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417732 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-image-import-ca\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417750 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417766 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffg8t\" (UniqueName: \"kubernetes.io/projected/cc53410b-3bb5-45cf-aa14-ca460c71e5f0-kube-api-access-ffg8t\") pod \"dns-operator-744455d44c-j5bq2\" (UID: \"cc53410b-3bb5-45cf-aa14-ca460c71e5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417783 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417797 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-audit\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417811 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-serving-cert\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417828 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrwxf\" (UniqueName: \"kubernetes.io/projected/1c451d04-9071-4d89-a6aa-a26e07523cf6-kube-api-access-qrwxf\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417844 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fa17e12-0683-4fba-810b-fa1c10a2738f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417860 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24scl\" (UniqueName: \"kubernetes.io/projected/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-kube-api-access-24scl\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417902 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bs4h\" (UniqueName: \"kubernetes.io/projected/620354cc-25a3-433f-9ee7-af4ed1f94827-kube-api-access-6bs4h\") pod \"openshift-apiserver-operator-796bbdcf4f-bqxx9\" (UID: \"620354cc-25a3-433f-9ee7-af4ed1f94827\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417921 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-policies\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417938 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417953 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c451d04-9071-4d89-a6aa-a26e07523cf6-serving-cert\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417969 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-oauth-config\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.417987 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4ac80a-bd9e-41b4-9954-219008ad570d-config\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418003 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-console-config\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418017 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-service-ca\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418037 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf1f752c-0d46-4655-930a-c063d386b3c9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wjfks\" (UID: \"bf1f752c-0d46-4655-930a-c063d386b3c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418058 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c993c9c-5308-4cb6-9e94-f477625c6263-auth-proxy-config\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418073 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ktnq\" (UniqueName: \"kubernetes.io/projected/46626fe2-913c-4594-b04a-fce651a9924f-kube-api-access-6ktnq\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418089 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/99943e7a-151b-4129-9205-f7e78e43fd3c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418103 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/99943e7a-151b-4129-9205-f7e78e43fd3c-encryption-config\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418304 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-serving-cert\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418330 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79cp5\" (UniqueName: \"kubernetes.io/projected/1fa17e12-0683-4fba-810b-fa1c10a2738f-kube-api-access-79cp5\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418352 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ggckn\" (UID: \"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418376 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46626fe2-913c-4594-b04a-fce651a9924f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418396 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/99943e7a-151b-4129-9205-f7e78e43fd3c-audit-dir\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418416 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418434 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/620354cc-25a3-433f-9ee7-af4ed1f94827-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bqxx9\" (UID: \"620354cc-25a3-433f-9ee7-af4ed1f94827\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418450 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsg5j\" (UniqueName: \"kubernetes.io/projected/bc4ac80a-bd9e-41b4-9954-219008ad570d-kube-api-access-fsg5j\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418466 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-audit-dir\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418486 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6dt4\" (UniqueName: \"kubernetes.io/projected/8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f-kube-api-access-r6dt4\") pod \"openshift-controller-manager-operator-756b6f6bc6-ggckn\" (UID: \"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418513 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/46626fe2-913c-4594-b04a-fce651a9924f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418530 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf1f752c-0d46-4655-930a-c063d386b3c9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wjfks\" (UID: \"bf1f752c-0d46-4655-930a-c063d386b3c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418551 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/99943e7a-151b-4129-9205-f7e78e43fd3c-etcd-client\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418566 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99943e7a-151b-4129-9205-f7e78e43fd3c-serving-cert\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418583 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrb4w\" (UniqueName: \"kubernetes.io/projected/af8c7a67-79c2-4892-a180-ee539e48bd2b-kube-api-access-qrb4w\") pod \"downloads-7954f5f757-v4cwq\" (UID: \"af8c7a67-79c2-4892-a180-ee539e48bd2b\") " pod="openshift-console/downloads-7954f5f757-v4cwq"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418600 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4ac80a-bd9e-41b4-9954-219008ad570d-serving-cert\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418615 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5zpg\" (UniqueName: \"kubernetes.io/projected/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-kube-api-access-f5zpg\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418632 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljpl\" (UniqueName: \"kubernetes.io/projected/7689249c-9002-4b86-ba80-67bff6b584c4-kube-api-access-6ljpl\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418650 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c993c9c-5308-4cb6-9e94-f477625c6263-machine-approver-tls\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418669 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418689 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418705 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-service-ca-bundle\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418721 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7qtl\" (UniqueName: \"kubernetes.io/projected/988eba95-b990-4f5a-ad25-e4129a8849d1-kube-api-access-t7qtl\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418737 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-etcd-serving-ca\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418755 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/199d51ae-0d72-4e64-a8eb-546c07076c21-default-certificate\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418775 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418792 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc53410b-3bb5-45cf-aa14-ca460c71e5f0-metrics-tls\") pod \"dns-operator-744455d44c-j5bq2\" (UID: \"cc53410b-3bb5-45cf-aa14-ca460c71e5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418812 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-config\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418831 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-dir\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418848 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418866 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/199d51ae-0d72-4e64-a8eb-546c07076c21-metrics-certs\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418906 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1f752c-0d46-4655-930a-c063d386b3c9-config\") pod \"kube-apiserver-operator-766d6c64bb-wjfks\" (UID: \"bf1f752c-0d46-4655-930a-c063d386b3c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418925 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418941 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-trusted-ca-bundle\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418959 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-config\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.418975 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689249c-9002-4b86-ba80-67bff6b584c4-serving-cert\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.419001 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kchrd\" (UniqueName: \"kubernetes.io/projected/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-kube-api-access-kchrd\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.419017 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.419033 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-etcd-client\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.419048 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-serving-cert\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.419066 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ql5r\" (UniqueName: \"kubernetes.io/projected/199d51ae-0d72-4e64-a8eb-546c07076c21-kube-api-access-8ql5r\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.419084 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.419101 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa17e12-0683-4fba-810b-fa1c10a2738f-config\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.419117 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c178d3ca-882b-4143-bad1-b648220f66c7-serving-cert\") pod \"openshift-config-operator-7777fb866f-dlt5w\" (UID: \"c178d3ca-882b-4143-bad1-b648220f66c7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.419135 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/199d51ae-0d72-4e64-a8eb-546c07076c21-stats-auth\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.423794 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ggckn\" (UID: \"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.424844 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-audit\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.425924 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn"]
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.426001 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4x82r"]
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.426013 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"]
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.426153 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-trusted-ca-bundle\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r"
Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.426665 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\"
(UniqueName: \"kubernetes.io/configmap/1fa17e12-0683-4fba-810b-fa1c10a2738f-images\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.426807 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.426865 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.427539 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c178d3ca-882b-4143-bad1-b648220f66c7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-dlt5w\" (UID: \"c178d3ca-882b-4143-bad1-b648220f66c7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.427773 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/620354cc-25a3-433f-9ee7-af4ed1f94827-config\") pod \"openshift-apiserver-operator-796bbdcf4f-bqxx9\" (UID: \"620354cc-25a3-433f-9ee7-af4ed1f94827\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" Dec 04 15:38:31 crc 
kubenswrapper[4878]: I1204 15:38:31.428564 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99943e7a-151b-4129-9205-f7e78e43fd3c-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.428839 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-config\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.429383 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-config\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.429439 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-etcd-client\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.429489 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.430447 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8zkjc"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.430774 4878 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.431147 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bc4ac80a-bd9e-41b4-9954-219008ad570d-trusted-ca\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.431473 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-serving-cert\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.432646 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/cda9500b-96aa-457f-b588-cb2efd9f36e9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-kdhmk\" (UID: \"cda9500b-96aa-457f-b588-cb2efd9f36e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.433524 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c993c9c-5308-4cb6-9e94-f477625c6263-config\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.433627 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-node-pullsecrets\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.433976 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-trusted-ca-bundle\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.434435 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-oauth-serving-cert\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.434844 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-serving-cert\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.434902 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.434994 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/99943e7a-151b-4129-9205-f7e78e43fd3c-audit-policies\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.435598 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.435906 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-client-ca\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.435950 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1fa17e12-0683-4fba-810b-fa1c10a2738f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.436056 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-client-ca\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.436460 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fa17e12-0683-4fba-810b-fa1c10a2738f-config\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.436678 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.436986 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.437007 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.437074 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-policies\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.437326 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.437348 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46626fe2-913c-4594-b04a-fce651a9924f-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.440197 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-config\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.440643 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.438629 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.441918 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4ac80a-bd9e-41b4-9954-219008ad570d-config\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.442074 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/99943e7a-151b-4129-9205-f7e78e43fd3c-audit-dir\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.442336 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.442379 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.442612 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8c993c9c-5308-4cb6-9e94-f477625c6263-auth-proxy-config\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.443504 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-console-config\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.443637 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-image-import-ca\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.444074 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-oauth-config\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.444090 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-audit-dir\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.444132 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-config\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.444600 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-etcd-serving-ca\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.444792 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-service-ca\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.445066 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-dir\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.445272 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-service-ca-bundle\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.445480 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.445590 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.446388 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-j5bq2"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.447598 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-login\") pod 
\"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.447623 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8c993c9c-5308-4cb6-9e94-f477625c6263-machine-approver-tls\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.447999 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.448133 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/46626fe2-913c-4594-b04a-fce651a9924f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.448344 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-encryption-config\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.448401 4878 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4ac80a-bd9e-41b4-9954-219008ad570d-serving-cert\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.448737 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.448822 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-serving-cert\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.449209 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99943e7a-151b-4129-9205-f7e78e43fd3c-serving-cert\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.449248 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c178d3ca-882b-4143-bad1-b648220f66c7-serving-cert\") pod \"openshift-config-operator-7777fb866f-dlt5w\" (UID: \"c178d3ca-882b-4143-bad1-b648220f66c7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" Dec 04 15:38:31 crc 
kubenswrapper[4878]: I1204 15:38:31.449555 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c451d04-9071-4d89-a6aa-a26e07523cf6-serving-cert\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.449816 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.450330 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/99943e7a-151b-4129-9205-f7e78e43fd3c-encryption-config\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.451032 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.451259 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-br92t"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.451306 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/620354cc-25a3-433f-9ee7-af4ed1f94827-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-bqxx9\" (UID: \"620354cc-25a3-433f-9ee7-af4ed1f94827\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.452625 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.452810 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/99943e7a-151b-4129-9205-f7e78e43fd3c-etcd-client\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.453244 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cc53410b-3bb5-45cf-aa14-ca460c71e5f0-metrics-tls\") pod \"dns-operator-744455d44c-j5bq2\" (UID: \"cc53410b-3bb5-45cf-aa14-ca460c71e5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.453678 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkp9z"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.455503 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v4cwq"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.456623 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.456853 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689249c-9002-4b86-ba80-67bff6b584c4-serving-cert\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.457201 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/99943e7a-151b-4129-9205-f7e78e43fd3c-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.457823 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-hwzql"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.458983 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.459082 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g9zqn"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.460227 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgtbf"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.461520 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.462822 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.463754 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ggckn\" (UID: \"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.463848 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7mgms"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.466497 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.468344 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.469510 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.471368 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.471749 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-drjfj"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.474331 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.478714 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.479973 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.481909 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.483353 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.485920 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.487007 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.488413 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8dtmz"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.489830 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.490808 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.491396 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.492697 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-cjht8"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.495034 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lgmhx"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.495157 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.495539 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-75pd8"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.499776 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cjht8"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.504931 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8dtmz"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.508708 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-cpnks"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.510234 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cpnks" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.510365 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cpnks"] Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.511218 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.520303 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf1f752c-0d46-4655-930a-c063d386b3c9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wjfks\" (UID: \"bf1f752c-0d46-4655-930a-c063d386b3c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.520399 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/199d51ae-0d72-4e64-a8eb-546c07076c21-default-certificate\") pod 
\"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.520433 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/199d51ae-0d72-4e64-a8eb-546c07076c21-metrics-certs\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.520458 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1f752c-0d46-4655-930a-c063d386b3c9-config\") pod \"kube-apiserver-operator-766d6c64bb-wjfks\" (UID: \"bf1f752c-0d46-4655-930a-c063d386b3c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.520506 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ql5r\" (UniqueName: \"kubernetes.io/projected/199d51ae-0d72-4e64-a8eb-546c07076c21-kube-api-access-8ql5r\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.520530 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/199d51ae-0d72-4e64-a8eb-546c07076c21-stats-auth\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.520568 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/199d51ae-0d72-4e64-a8eb-546c07076c21-service-ca-bundle\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.520783 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf1f752c-0d46-4655-930a-c063d386b3c9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wjfks\" (UID: \"bf1f752c-0d46-4655-930a-c063d386b3c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.522195 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/199d51ae-0d72-4e64-a8eb-546c07076c21-service-ca-bundle\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.524450 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/199d51ae-0d72-4e64-a8eb-546c07076c21-default-certificate\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.524539 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/199d51ae-0d72-4e64-a8eb-546c07076c21-metrics-certs\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.530961 4878 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.539842 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/199d51ae-0d72-4e64-a8eb-546c07076c21-stats-auth\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.551429 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.571405 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.590736 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.610951 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.631160 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.650975 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.671784 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.691418 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 04 15:38:31 
crc kubenswrapper[4878]: I1204 15:38:31.711574 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.731703 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.751200 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.771384 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.791502 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.812208 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.831504 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.851001 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.855376 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf1f752c-0d46-4655-930a-c063d386b3c9-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wjfks\" (UID: \"bf1f752c-0d46-4655-930a-c063d386b3c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" Dec 04 15:38:31 crc 
kubenswrapper[4878]: I1204 15:38:31.871612 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.872361 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf1f752c-0d46-4655-930a-c063d386b3c9-config\") pod \"kube-apiserver-operator-766d6c64bb-wjfks\" (UID: \"bf1f752c-0d46-4655-930a-c063d386b3c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.891476 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.911620 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.931492 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.951522 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.972202 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 04 15:38:31 crc kubenswrapper[4878]: I1204 15:38:31.992511 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.011191 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 
04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.032340 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.051811 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.072270 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.111150 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.130915 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.156532 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.170996 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.191335 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.211333 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.231216 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 
15:38:32.250955 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.271327 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.290404 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.311300 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.331622 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.349398 4878 request.go:700] Waited for 1.019832762s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.351099 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.371430 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.390787 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.411465 4878 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.430794 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.450823 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.471256 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.491417 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.511165 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.531310 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.551428 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.571768 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.591509 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.612139 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 
15:38:32.631252 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.651768 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.691971 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.710684 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.730962 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.752159 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.771766 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.800035 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.811741 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.832370 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.851721 4878 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.871095 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.891492 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.910807 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.930898 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.952158 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.989276 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kchrd\" (UniqueName: \"kubernetes.io/projected/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-kube-api-access-kchrd\") pod \"oauth-openshift-558db77b4-g9zqn\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:32 crc kubenswrapper[4878]: I1204 15:38:32.991545 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.016419 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.027006 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrwxf\" (UniqueName: \"kubernetes.io/projected/1c451d04-9071-4d89-a6aa-a26e07523cf6-kube-api-access-qrwxf\") pod \"controller-manager-879f6c89f-br92t\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.031719 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.069794 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dl4t\" (UniqueName: \"kubernetes.io/projected/c178d3ca-882b-4143-bad1-b648220f66c7-kube-api-access-4dl4t\") pod \"openshift-config-operator-7777fb866f-dlt5w\" (UID: \"c178d3ca-882b-4143-bad1-b648220f66c7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.088643 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lm84\" (UniqueName: \"kubernetes.io/projected/8c993c9c-5308-4cb6-9e94-f477625c6263-kube-api-access-5lm84\") pod \"machine-approver-56656f9798-2bchj\" (UID: \"8c993c9c-5308-4cb6-9e94-f477625c6263\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.111156 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24scl\" (UniqueName: \"kubernetes.io/projected/efc42132-e5bc-4a5e-90ee-bf34ad2a286b-kube-api-access-24scl\") pod \"authentication-operator-69f744f599-8zkjc\" (UID: \"efc42132-e5bc-4a5e-90ee-bf34ad2a286b\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.126056 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bs4h\" (UniqueName: \"kubernetes.io/projected/620354cc-25a3-433f-9ee7-af4ed1f94827-kube-api-access-6bs4h\") pod \"openshift-apiserver-operator-796bbdcf4f-bqxx9\" (UID: \"620354cc-25a3-433f-9ee7-af4ed1f94827\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.146024 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6dt4\" (UniqueName: \"kubernetes.io/projected/8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f-kube-api-access-r6dt4\") pod \"openshift-controller-manager-operator-756b6f6bc6-ggckn\" (UID: \"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.167761 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffg8t\" (UniqueName: \"kubernetes.io/projected/cc53410b-3bb5-45cf-aa14-ca460c71e5f0-kube-api-access-ffg8t\") pod \"dns-operator-744455d44c-j5bq2\" (UID: \"cc53410b-3bb5-45cf-aa14-ca460c71e5f0\") " pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.177216 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g9zqn"] Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.188840 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.190022 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljpl\" (UniqueName: \"kubernetes.io/projected/7689249c-9002-4b86-ba80-67bff6b584c4-kube-api-access-6ljpl\") pod \"route-controller-manager-6576b87f9c-zvbhg\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.204500 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/46626fe2-913c-4594-b04a-fce651a9924f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.210729 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.225228 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ktnq\" (UniqueName: \"kubernetes.io/projected/46626fe2-913c-4594-b04a-fce651a9924f-kube-api-access-6ktnq\") pod \"cluster-image-registry-operator-dc59b4c8b-qn9w6\" (UID: \"46626fe2-913c-4594-b04a-fce651a9924f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.245680 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsg5j\" (UniqueName: \"kubernetes.io/projected/bc4ac80a-bd9e-41b4-9954-219008ad570d-kube-api-access-fsg5j\") pod \"console-operator-58897d9998-tcd9t\" (UID: \"bc4ac80a-bd9e-41b4-9954-219008ad570d\") " pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.268296 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw489\" (UniqueName: \"kubernetes.io/projected/99943e7a-151b-4129-9205-f7e78e43fd3c-kube-api-access-sw489\") pod \"apiserver-7bbb656c7d-tvshr\" (UID: \"99943e7a-151b-4129-9205-f7e78e43fd3c\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.291997 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrb4w\" (UniqueName: \"kubernetes.io/projected/af8c7a67-79c2-4892-a180-ee539e48bd2b-kube-api-access-qrb4w\") pod \"downloads-7954f5f757-v4cwq\" (UID: \"af8c7a67-79c2-4892-a180-ee539e48bd2b\") " pod="openshift-console/downloads-7954f5f757-v4cwq" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.299696 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.308502 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.309959 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79cp5\" (UniqueName: \"kubernetes.io/projected/1fa17e12-0683-4fba-810b-fa1c10a2738f-kube-api-access-79cp5\") pod \"machine-api-operator-5694c8668f-7mgms\" (UID: \"1fa17e12-0683-4fba-810b-fa1c10a2738f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.315023 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.325259 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.329486 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc9vw\" (UniqueName: \"kubernetes.io/projected/cda9500b-96aa-457f-b588-cb2efd9f36e9-kube-api-access-wc9vw\") pod \"cluster-samples-operator-665b6dd947-kdhmk\" (UID: \"cda9500b-96aa-457f-b588-cb2efd9f36e9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.340375 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.344920 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.349580 4878 request.go:700] Waited for 1.90547546s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.349934 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5zpg\" (UniqueName: \"kubernetes.io/projected/9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba-kube-api-access-f5zpg\") pod \"apiserver-76f77b778f-h2w2r\" (UID: \"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba\") " pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.351136 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.361165 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.367677 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.370827 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.371329 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7qtl\" (UniqueName: \"kubernetes.io/projected/988eba95-b990-4f5a-ad25-e4129a8849d1-kube-api-access-t7qtl\") pod \"console-f9d7485db-4x82r\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.386823 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.391493 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.411504 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.428427 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-br92t"] Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.430570 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.447574 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8zkjc"] Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.450757 4878 reflector.go:368] Caches populated for *v1.Secret from 
object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.471773 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.491107 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.511653 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.531345 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.534355 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 04 15:38:33 crc kubenswrapper[4878]: W1204 15:38:33.537264 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefc42132_e5bc_4a5e_90ee_bf34ad2a286b.slice/crio-78143fef0735af5163453ce4ec98a8915d19d1e5c7d1ec5bce3e9ccd7fa429b5 WatchSource:0}: Error finding container 78143fef0735af5163453ce4ec98a8915d19d1e5c7d1ec5bce3e9ccd7fa429b5: Status 404 returned error can't find the container with id 78143fef0735af5163453ce4ec98a8915d19d1e5c7d1ec5bce3e9ccd7fa429b5 Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.554786 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.571218 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.585337 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-v4cwq" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.591258 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.591666 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.596936 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9"] Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.610588 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.625347 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-j5bq2"] Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.630386 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.661736 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf1f752c-0d46-4655-930a-c063d386b3c9-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wjfks\" (UID: \"bf1f752c-0d46-4655-930a-c063d386b3c9\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.674784 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ql5r\" (UniqueName: \"kubernetes.io/projected/199d51ae-0d72-4e64-a8eb-546c07076c21-kube-api-access-8ql5r\") pod \"router-default-5444994796-pwnk4\" (UID: \"199d51ae-0d72-4e64-a8eb-546c07076c21\") " pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.675517 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.756812 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6"] Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.934512 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" event={"ID":"0c31dded-d5e0-4f14-8de8-c4cf3ec56236","Type":"ContainerStarted","Data":"3110ee6fdfb0b28543894e7db5e43486dc078d3b11768d2a83c9cac4ce19db2d"} Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.935778 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" event={"ID":"8c993c9c-5308-4cb6-9e94-f477625c6263","Type":"ContainerStarted","Data":"5c102fdb9e8f7f2cb6e715b2beacfc846e0175ad837ae431b4e6f73fb35f92fe"} Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.937026 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" event={"ID":"efc42132-e5bc-4a5e-90ee-bf34ad2a286b","Type":"ContainerStarted","Data":"78143fef0735af5163453ce4ec98a8915d19d1e5c7d1ec5bce3e9ccd7fa429b5"} Dec 04 15:38:33 crc kubenswrapper[4878]: I1204 15:38:33.937935 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" event={"ID":"1c451d04-9071-4d89-a6aa-a26e07523cf6","Type":"ContainerStarted","Data":"ccbea1e0b0175015845d800176616a2b2e9e899f3ad79712fbee468c18c5557d"} Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.390336 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.390601 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: E1204 15:38:34.391189 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:34.891165971 +0000 UTC m=+158.853702947 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.412275 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"] Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.413773 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w"] Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.417324 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-7mgms"] Dec 04 15:38:34 crc kubenswrapper[4878]: 
I1204 15:38:34.417381 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr"] Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.417395 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tcd9t"] Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.419722 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn"] Dec 04 15:38:34 crc kubenswrapper[4878]: W1204 15:38:34.443113 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod620354cc_25a3_433f_9ee7_af4ed1f94827.slice/crio-a250146b655c78d1c2049bc74845a291f0e5f9d897d99f2772cee3c2ee640189 WatchSource:0}: Error finding container a250146b655c78d1c2049bc74845a291f0e5f9d897d99f2772cee3c2ee640189: Status 404 returned error can't find the container with id a250146b655c78d1c2049bc74845a291f0e5f9d897d99f2772cee3c2ee640189 Dec 04 15:38:34 crc kubenswrapper[4878]: W1204 15:38:34.446005 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46626fe2_913c_4594_b04a_fce651a9924f.slice/crio-d7c5081d20622ff5303592d930bca46bbbe28800b1c16ada73cc7ba116ff18bf WatchSource:0}: Error finding container d7c5081d20622ff5303592d930bca46bbbe28800b1c16ada73cc7ba116ff18bf: Status 404 returned error can't find the container with id d7c5081d20622ff5303592d930bca46bbbe28800b1c16ada73cc7ba116ff18bf Dec 04 15:38:34 crc kubenswrapper[4878]: W1204 15:38:34.466057 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1fa17e12_0683_4fba_810b_fa1c10a2738f.slice/crio-cf9264963dbfad1ccf2769c6ae3a03b2fea328dea177d19620ea162801afa729 WatchSource:0}: Error finding container 
cf9264963dbfad1ccf2769c6ae3a03b2fea328dea177d19620ea162801afa729: Status 404 returned error can't find the container with id cf9264963dbfad1ccf2769c6ae3a03b2fea328dea177d19620ea162801afa729 Dec 04 15:38:34 crc kubenswrapper[4878]: W1204 15:38:34.471235 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f66d4fb_9cfd_4bf5_93cb_e71ae704ff9f.slice/crio-0015b6179e3187d42cec5474be12d44e1b430d68e0482c7af27afa4782a2d21b WatchSource:0}: Error finding container 0015b6179e3187d42cec5474be12d44e1b430d68e0482c7af27afa4782a2d21b: Status 404 returned error can't find the container with id 0015b6179e3187d42cec5474be12d44e1b430d68e0482c7af27afa4782a2d21b Dec 04 15:38:34 crc kubenswrapper[4878]: W1204 15:38:34.473996 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7689249c_9002_4b86_ba80_67bff6b584c4.slice/crio-0f5ce35a81193bac13a22d0b635a1f0909fc9621dcde9105294c3d6962294eb9 WatchSource:0}: Error finding container 0f5ce35a81193bac13a22d0b635a1f0909fc9621dcde9105294c3d6962294eb9: Status 404 returned error can't find the container with id 0f5ce35a81193bac13a22d0b635a1f0909fc9621dcde9105294c3d6962294eb9 Dec 04 15:38:34 crc kubenswrapper[4878]: W1204 15:38:34.490612 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod199d51ae_0d72_4e64_a8eb_546c07076c21.slice/crio-1252eabd5c0e9e73e42a6d9c86cd801502b74c017acda31857b210dd29d10aed WatchSource:0}: Error finding container 1252eabd5c0e9e73e42a6d9c86cd801502b74c017acda31857b210dd29d10aed: Status 404 returned error can't find the container with id 1252eabd5c0e9e73e42a6d9c86cd801502b74c017acda31857b210dd29d10aed Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491489 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:34 crc kubenswrapper[4878]: E1204 15:38:34.491600 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:34.991559227 +0000 UTC m=+158.954096193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491734 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmrdr\" (UniqueName: \"kubernetes.io/projected/8ca81e36-1ef9-4b49-95a1-9c01f29afc81-kube-api-access-bmrdr\") pod \"catalog-operator-68c6474976-zvgw5\" (UID: \"8ca81e36-1ef9-4b49-95a1-9c01f29afc81\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491772 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b7f3bd6-78e8-46b2-ae10-631575d200ec-proxy-tls\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 
15:38:34.491790 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdtd9\" (UniqueName: \"kubernetes.io/projected/b6dcbfc3-4f5f-4baf-9a44-9dbe1bc151a6-kube-api-access-gdtd9\") pod \"migrator-59844c95c7-qxt6b\" (UID: \"b6dcbfc3-4f5f-4baf-9a44-9dbe1bc151a6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491817 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf698\" (UniqueName: \"kubernetes.io/projected/d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c-kube-api-access-sf698\") pod \"service-ca-9c57cc56f-lgmhx\" (UID: \"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491833 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/835eb0a3-753e-44e0-8124-ac51072a4692-etcd-client\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491858 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b7f3bd6-78e8-46b2-ae10-631575d200ec-images\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491900 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/dbfa5fb1-8fb8-41ef-805d-1034cf88853a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sx899\" (UID: \"dbfa5fb1-8fb8-41ef-805d-1034cf88853a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491941 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr4r4\" (UniqueName: \"kubernetes.io/projected/1b7f3bd6-78e8-46b2-ae10-631575d200ec-kube-api-access-rr4r4\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491959 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5081b0aa-9d4f-4741-9c05-4aab3e514f1b-config\") pod \"kube-controller-manager-operator-78b949d7b-kbvkk\" (UID: \"5081b0aa-9d4f-4741-9c05-4aab3e514f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491983 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvvh9\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-kube-api-access-zvvh9\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.491998 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e5556-d548-4ffc-a8a2-7b476164f5b7-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-6lvdj\" (UID: \"6b3e5556-d548-4ffc-a8a2-7b476164f5b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492025 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-bound-sa-token\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492041 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72hvc\" (UniqueName: \"kubernetes.io/projected/e78a5905-c297-4c98-81ae-a8a194a86c37-kube-api-access-72hvc\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492054 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ca81e36-1ef9-4b49-95a1-9c01f29afc81-srv-cert\") pod \"catalog-operator-68c6474976-zvgw5\" (UID: \"8ca81e36-1ef9-4b49-95a1-9c01f29afc81\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492071 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/df34382d-7e6c-47e3-9b9f-e9f9498faaa0-srv-cert\") pod \"olm-operator-6b444d44fb-q2v7l\" (UID: \"df34382d-7e6c-47e3-9b9f-e9f9498faaa0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492130 4878 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e78a5905-c297-4c98-81ae-a8a194a86c37-apiservice-cert\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492157 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26a0fa4d-3430-4477-beae-2b0fa9819756-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492193 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4c64782-cd14-4c8c-b74a-4cb2616edd29-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492237 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4c64782-cd14-4c8c-b74a-4cb2616edd29-trusted-ca\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492289 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492329 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4ntp\" (UniqueName: \"kubernetes.io/projected/b4c64782-cd14-4c8c-b74a-4cb2616edd29-kube-api-access-k4ntp\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492419 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6mxz\" (UniqueName: \"kubernetes.io/projected/dbfa5fb1-8fb8-41ef-805d-1034cf88853a-kube-api-access-x6mxz\") pod \"control-plane-machine-set-operator-78cbb6b69f-sx899\" (UID: \"dbfa5fb1-8fb8-41ef-805d-1034cf88853a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492448 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44fcd9c6-d991-4e6a-903d-bd23c6123d47-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4b6xn\" (UID: \"44fcd9c6-d991-4e6a-903d-bd23c6123d47\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492501 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-trusted-ca\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc 
kubenswrapper[4878]: I1204 15:38:34.492523 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5081b0aa-9d4f-4741-9c05-4aab3e514f1b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kbvkk\" (UID: \"5081b0aa-9d4f-4741-9c05-4aab3e514f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492581 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-certificates\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492621 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82f0fdc8-482e-4dde-8ecd-3607d5548331-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kt85s\" (UID: \"82f0fdc8-482e-4dde-8ecd-3607d5548331\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492644 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e78a5905-c297-4c98-81ae-a8a194a86c37-tmpfs\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492672 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8025a3f7-bab8-4787-bada-09aceb2e001b-config\") pod \"service-ca-operator-777779d784-8d9bj\" (UID: \"8025a3f7-bab8-4787-bada-09aceb2e001b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492710 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-tls\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492767 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82f0fdc8-482e-4dde-8ecd-3607d5548331-proxy-tls\") pod \"machine-config-controller-84d6567774-kt85s\" (UID: \"82f0fdc8-482e-4dde-8ecd-3607d5548331\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492892 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44fcd9c6-d991-4e6a-903d-bd23c6123d47-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4b6xn\" (UID: \"44fcd9c6-d991-4e6a-903d-bd23c6123d47\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492936 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e1fde0c6-c891-4ccf-8947-3fbdfe8c243e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-drjfj\" (UID: \"e1fde0c6-c891-4ccf-8947-3fbdfe8c243e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492956 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44fcd9c6-d991-4e6a-903d-bd23c6123d47-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4b6xn\" (UID: \"44fcd9c6-d991-4e6a-903d-bd23c6123d47\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.492977 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkg2b\" (UniqueName: \"kubernetes.io/projected/df34382d-7e6c-47e3-9b9f-e9f9498faaa0-kube-api-access-vkg2b\") pod \"olm-operator-6b444d44fb-q2v7l\" (UID: \"df34382d-7e6c-47e3-9b9f-e9f9498faaa0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493020 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lsj4\" (UniqueName: \"kubernetes.io/projected/82f0fdc8-482e-4dde-8ecd-3607d5548331-kube-api-access-8lsj4\") pod \"machine-config-controller-84d6567774-kt85s\" (UID: \"82f0fdc8-482e-4dde-8ecd-3607d5548331\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493041 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b7f3bd6-78e8-46b2-ae10-631575d200ec-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493115 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c-signing-cabundle\") pod \"service-ca-9c57cc56f-lgmhx\" (UID: \"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493138 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8025a3f7-bab8-4787-bada-09aceb2e001b-serving-cert\") pod \"service-ca-operator-777779d784-8d9bj\" (UID: \"8025a3f7-bab8-4787-bada-09aceb2e001b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493269 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ca81e36-1ef9-4b49-95a1-9c01f29afc81-profile-collector-cert\") pod \"catalog-operator-68c6474976-zvgw5\" (UID: \"8ca81e36-1ef9-4b49-95a1-9c01f29afc81\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493297 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/835eb0a3-753e-44e0-8124-ac51072a4692-config\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493335 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5081b0aa-9d4f-4741-9c05-4aab3e514f1b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kbvkk\" (UID: \"5081b0aa-9d4f-4741-9c05-4aab3e514f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493461 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c-signing-key\") pod \"service-ca-9c57cc56f-lgmhx\" (UID: \"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493490 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/835eb0a3-753e-44e0-8124-ac51072a4692-etcd-ca\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493512 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/835eb0a3-753e-44e0-8124-ac51072a4692-etcd-service-ca\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493587 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl6wh\" (UniqueName: \"kubernetes.io/projected/6b3e5556-d548-4ffc-a8a2-7b476164f5b7-kube-api-access-kl6wh\") pod \"package-server-manager-789f6589d5-6lvdj\" (UID: \"6b3e5556-d548-4ffc-a8a2-7b476164f5b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493620 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26a0fa4d-3430-4477-beae-2b0fa9819756-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493644 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/835eb0a3-753e-44e0-8124-ac51072a4692-serving-cert\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493670 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e78a5905-c297-4c98-81ae-a8a194a86c37-webhook-cert\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493690 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4c64782-cd14-4c8c-b74a-4cb2616edd29-metrics-tls\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493713 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84flj\" (UniqueName: \"kubernetes.io/projected/8025a3f7-bab8-4787-bada-09aceb2e001b-kube-api-access-84flj\") pod \"service-ca-operator-777779d784-8d9bj\" (UID: \"8025a3f7-bab8-4787-bada-09aceb2e001b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493772 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/df34382d-7e6c-47e3-9b9f-e9f9498faaa0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-q2v7l\" (UID: \"df34382d-7e6c-47e3-9b9f-e9f9498faaa0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493798 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcrhn\" (UniqueName: \"kubernetes.io/projected/e1fde0c6-c891-4ccf-8947-3fbdfe8c243e-kube-api-access-tcrhn\") pod \"multus-admission-controller-857f4d67dd-drjfj\" (UID: \"e1fde0c6-c891-4ccf-8947-3fbdfe8c243e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.493928 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmr9\" (UniqueName: \"kubernetes.io/projected/835eb0a3-753e-44e0-8124-ac51072a4692-kube-api-access-rqmr9\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: E1204 15:38:34.503513 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.003479743 +0000 UTC m=+158.966016699 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:34 crc kubenswrapper[4878]: W1204 15:38:34.510868 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99943e7a_151b_4129_9205_f7e78e43fd3c.slice/crio-e3991dd69713318c576d55ff76f1acc9e67e4ccdd5351f824aab94c7d4fee656 WatchSource:0}: Error finding container e3991dd69713318c576d55ff76f1acc9e67e4ccdd5351f824aab94c7d4fee656: Status 404 returned error can't find the container with id e3991dd69713318c576d55ff76f1acc9e67e4ccdd5351f824aab94c7d4fee656
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595179 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595403 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ca81e36-1ef9-4b49-95a1-9c01f29afc81-profile-collector-cert\") pod \"catalog-operator-68c6474976-zvgw5\" (UID: \"8ca81e36-1ef9-4b49-95a1-9c01f29afc81\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595436 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/835eb0a3-753e-44e0-8124-ac51072a4692-config\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595478 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5081b0aa-9d4f-4741-9c05-4aab3e514f1b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kbvkk\" (UID: \"5081b0aa-9d4f-4741-9c05-4aab3e514f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595509 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e43204-f248-4d01-a9a8-9c264008e2fb-secret-volume\") pod \"collect-profiles-29414370-pphl9\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595537 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1acfd737-9d16-428f-b839-1f5f24a7298c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n8vsq\" (UID: \"1acfd737-9d16-428f-b839-1f5f24a7298c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595563 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c-signing-key\") pod \"service-ca-9c57cc56f-lgmhx\" (UID: \"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595589 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/835eb0a3-753e-44e0-8124-ac51072a4692-etcd-ca\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595609 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/835eb0a3-753e-44e0-8124-ac51072a4692-etcd-service-ca\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595632 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl6wh\" (UniqueName: \"kubernetes.io/projected/6b3e5556-d548-4ffc-a8a2-7b476164f5b7-kube-api-access-kl6wh\") pod \"package-server-manager-789f6589d5-6lvdj\" (UID: \"6b3e5556-d548-4ffc-a8a2-7b476164f5b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595656 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-csi-data-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595693 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26a0fa4d-3430-4477-beae-2b0fa9819756-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595715 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/835eb0a3-753e-44e0-8124-ac51072a4692-serving-cert\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595739 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz68g\" (UniqueName: \"kubernetes.io/projected/1acfd737-9d16-428f-b839-1f5f24a7298c-kube-api-access-vz68g\") pod \"kube-storage-version-migrator-operator-b67b599dd-n8vsq\" (UID: \"1acfd737-9d16-428f-b839-1f5f24a7298c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595765 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4c64782-cd14-4c8c-b74a-4cb2616edd29-metrics-tls\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595788 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84flj\" (UniqueName: \"kubernetes.io/projected/8025a3f7-bab8-4787-bada-09aceb2e001b-kube-api-access-84flj\") pod \"service-ca-operator-777779d784-8d9bj\" (UID: \"8025a3f7-bab8-4787-bada-09aceb2e001b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595811 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e78a5905-c297-4c98-81ae-a8a194a86c37-webhook-cert\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595835 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/df34382d-7e6c-47e3-9b9f-e9f9498faaa0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-q2v7l\" (UID: \"df34382d-7e6c-47e3-9b9f-e9f9498faaa0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595860 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcrhn\" (UniqueName: \"kubernetes.io/projected/e1fde0c6-c891-4ccf-8947-3fbdfe8c243e-kube-api-access-tcrhn\") pod \"multus-admission-controller-857f4d67dd-drjfj\" (UID: \"e1fde0c6-c891-4ccf-8947-3fbdfe8c243e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595965 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqmr9\" (UniqueName: \"kubernetes.io/projected/835eb0a3-753e-44e0-8124-ac51072a4692-kube-api-access-rqmr9\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.595994 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-75pd8\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " pod="openshift-marketplace/marketplace-operator-79b997595-75pd8"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596022 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e43204-f248-4d01-a9a8-9c264008e2fb-config-volume\") pod \"collect-profiles-29414370-pphl9\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596073 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpw86\" (UniqueName: \"kubernetes.io/projected/19e43204-f248-4d01-a9a8-9c264008e2fb-kube-api-access-zpw86\") pod \"collect-profiles-29414370-pphl9\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596099 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-registration-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596121 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmjf4\" (UniqueName: \"kubernetes.io/projected/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-kube-api-access-rmjf4\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596144 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmrdr\" (UniqueName: \"kubernetes.io/projected/8ca81e36-1ef9-4b49-95a1-9c01f29afc81-kube-api-access-bmrdr\") pod \"catalog-operator-68c6474976-zvgw5\" (UID: \"8ca81e36-1ef9-4b49-95a1-9c01f29afc81\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596167 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b7f3bd6-78e8-46b2-ae10-631575d200ec-proxy-tls\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596191 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1acfd737-9d16-428f-b839-1f5f24a7298c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n8vsq\" (UID: \"1acfd737-9d16-428f-b839-1f5f24a7298c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596216 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdtd9\" (UniqueName: \"kubernetes.io/projected/b6dcbfc3-4f5f-4baf-9a44-9dbe1bc151a6-kube-api-access-gdtd9\") pod \"migrator-59844c95c7-qxt6b\" (UID: \"b6dcbfc3-4f5f-4baf-9a44-9dbe1bc151a6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596239 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b7f3bd6-78e8-46b2-ae10-631575d200ec-images\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596265 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbfa5fb1-8fb8-41ef-805d-1034cf88853a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sx899\" (UID: \"dbfa5fb1-8fb8-41ef-805d-1034cf88853a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596290 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf698\" (UniqueName: \"kubernetes.io/projected/d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c-kube-api-access-sf698\") pod \"service-ca-9c57cc56f-lgmhx\" (UID: \"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596315 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/835eb0a3-753e-44e0-8124-ac51072a4692-etcd-client\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596340 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-plugins-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596379 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr4r4\" (UniqueName: \"kubernetes.io/projected/1b7f3bd6-78e8-46b2-ae10-631575d200ec-kube-api-access-rr4r4\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596406 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5081b0aa-9d4f-4741-9c05-4aab3e514f1b-config\") pod \"kube-controller-manager-operator-78b949d7b-kbvkk\" (UID: \"5081b0aa-9d4f-4741-9c05-4aab3e514f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596431 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-bound-sa-token\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596452 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvvh9\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-kube-api-access-zvvh9\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596476 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e5556-d548-4ffc-a8a2-7b476164f5b7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6lvdj\" (UID: \"6b3e5556-d548-4ffc-a8a2-7b476164f5b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596509 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72hvc\" (UniqueName: \"kubernetes.io/projected/e78a5905-c297-4c98-81ae-a8a194a86c37-kube-api-access-72hvc\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596532 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8ca81e36-1ef9-4b49-95a1-9c01f29afc81-srv-cert\") pod \"catalog-operator-68c6474976-zvgw5\" (UID: \"8ca81e36-1ef9-4b49-95a1-9c01f29afc81\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596555 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/df34382d-7e6c-47e3-9b9f-e9f9498faaa0-srv-cert\") pod \"olm-operator-6b444d44fb-q2v7l\" (UID: \"df34382d-7e6c-47e3-9b9f-e9f9498faaa0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596579 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e78a5905-c297-4c98-81ae-a8a194a86c37-apiservice-cert\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596601 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-75pd8\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " pod="openshift-marketplace/marketplace-operator-79b997595-75pd8"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596624 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26a0fa4d-3430-4477-beae-2b0fa9819756-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596651 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4c64782-cd14-4c8c-b74a-4cb2616edd29-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596693 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4c64782-cd14-4c8c-b74a-4cb2616edd29-trusted-ca\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596737 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4ntp\" (UniqueName: \"kubernetes.io/projected/b4c64782-cd14-4c8c-b74a-4cb2616edd29-kube-api-access-k4ntp\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596761 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-socket-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596828 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6mxz\" (UniqueName: \"kubernetes.io/projected/dbfa5fb1-8fb8-41ef-805d-1034cf88853a-kube-api-access-x6mxz\") pod \"control-plane-machine-set-operator-78cbb6b69f-sx899\" (UID: \"dbfa5fb1-8fb8-41ef-805d-1034cf88853a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596855 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44fcd9c6-d991-4e6a-903d-bd23c6123d47-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4b6xn\" (UID: \"44fcd9c6-d991-4e6a-903d-bd23c6123d47\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596901 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ed08c9a-e799-4301-a2f1-fec6ffa81c45-metrics-tls\") pod \"dns-default-cjht8\" (UID: \"2ed08c9a-e799-4301-a2f1-fec6ffa81c45\") " pod="openshift-dns/dns-default-cjht8"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596927 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-trusted-ca\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596949 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5081b0aa-9d4f-4741-9c05-4aab3e514f1b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kbvkk\" (UID: \"5081b0aa-9d4f-4741-9c05-4aab3e514f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.596974 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ed08c9a-e799-4301-a2f1-fec6ffa81c45-config-volume\") pod \"dns-default-cjht8\" (UID: \"2ed08c9a-e799-4301-a2f1-fec6ffa81c45\") " pod="openshift-dns/dns-default-cjht8"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597025 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-certificates\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597066 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82f0fdc8-482e-4dde-8ecd-3607d5548331-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kt85s\" (UID: \"82f0fdc8-482e-4dde-8ecd-3607d5548331\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597090 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e78a5905-c297-4c98-81ae-a8a194a86c37-tmpfs\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597113 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8025a3f7-bab8-4787-bada-09aceb2e001b-config\") pod \"service-ca-operator-777779d784-8d9bj\" (UID: \"8025a3f7-bab8-4787-bada-09aceb2e001b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597139 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-tls\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597165 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9z4l\" (UniqueName: \"kubernetes.io/projected/30c86a77-cd80-40e2-a04a-acee06763136-kube-api-access-k9z4l\") pod \"machine-config-server-hwzql\" (UID: \"30c86a77-cd80-40e2-a04a-acee06763136\") " pod="openshift-machine-config-operator/machine-config-server-hwzql"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597191 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44fcd9c6-d991-4e6a-903d-bd23c6123d47-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4b6xn\" (UID: \"44fcd9c6-d991-4e6a-903d-bd23c6123d47\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn"
Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597214
4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d181976c-fe3f-40f0-a8e4-5b1774143896-cert\") pod \"ingress-canary-cpnks\" (UID: \"d181976c-fe3f-40f0-a8e4-5b1774143896\") " pod="openshift-ingress-canary/ingress-canary-cpnks" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597236 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30c86a77-cd80-40e2-a04a-acee06763136-certs\") pod \"machine-config-server-hwzql\" (UID: \"30c86a77-cd80-40e2-a04a-acee06763136\") " pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597260 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9rv\" (UniqueName: \"kubernetes.io/projected/2ed08c9a-e799-4301-a2f1-fec6ffa81c45-kube-api-access-ch9rv\") pod \"dns-default-cjht8\" (UID: \"2ed08c9a-e799-4301-a2f1-fec6ffa81c45\") " pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597289 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82f0fdc8-482e-4dde-8ecd-3607d5548331-proxy-tls\") pod \"machine-config-controller-84d6567774-kt85s\" (UID: \"82f0fdc8-482e-4dde-8ecd-3607d5548331\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597311 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44fcd9c6-d991-4e6a-903d-bd23c6123d47-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4b6xn\" (UID: \"44fcd9c6-d991-4e6a-903d-bd23c6123d47\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597335 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e1fde0c6-c891-4ccf-8947-3fbdfe8c243e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-drjfj\" (UID: \"e1fde0c6-c891-4ccf-8947-3fbdfe8c243e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597362 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkg2b\" (UniqueName: \"kubernetes.io/projected/df34382d-7e6c-47e3-9b9f-e9f9498faaa0-kube-api-access-vkg2b\") pod \"olm-operator-6b444d44fb-q2v7l\" (UID: \"df34382d-7e6c-47e3-9b9f-e9f9498faaa0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597386 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b7f3bd6-78e8-46b2-ae10-631575d200ec-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597410 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lsj4\" (UniqueName: \"kubernetes.io/projected/82f0fdc8-482e-4dde-8ecd-3607d5548331-kube-api-access-8lsj4\") pod \"machine-config-controller-84d6567774-kt85s\" (UID: \"82f0fdc8-482e-4dde-8ecd-3607d5548331\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597451 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-cfx6j\" (UniqueName: \"kubernetes.io/projected/d181976c-fe3f-40f0-a8e4-5b1774143896-kube-api-access-cfx6j\") pod \"ingress-canary-cpnks\" (UID: \"d181976c-fe3f-40f0-a8e4-5b1774143896\") " pod="openshift-ingress-canary/ingress-canary-cpnks" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597476 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c-signing-cabundle\") pod \"service-ca-9c57cc56f-lgmhx\" (UID: \"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597498 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8025a3f7-bab8-4787-bada-09aceb2e001b-serving-cert\") pod \"service-ca-operator-777779d784-8d9bj\" (UID: \"8025a3f7-bab8-4787-bada-09aceb2e001b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597522 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vp6\" (UniqueName: \"kubernetes.io/projected/1437aa02-6698-481c-ab03-8b2c02f64774-kube-api-access-79vp6\") pod \"marketplace-operator-79b997595-75pd8\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.597547 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-mountpoint-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 
15:38:34.597570 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30c86a77-cd80-40e2-a04a-acee06763136-node-bootstrap-token\") pod \"machine-config-server-hwzql\" (UID: \"30c86a77-cd80-40e2-a04a-acee06763136\") " pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:34 crc kubenswrapper[4878]: E1204 15:38:34.597716 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.097693421 +0000 UTC m=+159.060230377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.599346 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/835eb0a3-753e-44e0-8124-ac51072a4692-config\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.600195 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/82f0fdc8-482e-4dde-8ecd-3607d5548331-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-kt85s\" (UID: \"82f0fdc8-482e-4dde-8ecd-3607d5548331\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.600512 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e78a5905-c297-4c98-81ae-a8a194a86c37-tmpfs\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.603405 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1b7f3bd6-78e8-46b2-ae10-631575d200ec-auth-proxy-config\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.603757 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-trusted-ca\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.604121 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4c64782-cd14-4c8c-b74a-4cb2616edd29-trusted-ca\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.605064 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-certificates\") pod 
\"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.605226 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c-signing-cabundle\") pod \"service-ca-9c57cc56f-lgmhx\" (UID: \"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.608618 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e1fde0c6-c891-4ccf-8947-3fbdfe8c243e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-drjfj\" (UID: \"e1fde0c6-c891-4ccf-8947-3fbdfe8c243e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.609746 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e78a5905-c297-4c98-81ae-a8a194a86c37-apiservice-cert\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.610217 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8ca81e36-1ef9-4b49-95a1-9c01f29afc81-profile-collector-cert\") pod \"catalog-operator-68c6474976-zvgw5\" (UID: \"8ca81e36-1ef9-4b49-95a1-9c01f29afc81\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.610305 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/8ca81e36-1ef9-4b49-95a1-9c01f29afc81-srv-cert\") pod \"catalog-operator-68c6474976-zvgw5\" (UID: \"8ca81e36-1ef9-4b49-95a1-9c01f29afc81\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.610255 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26a0fa4d-3430-4477-beae-2b0fa9819756-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.611098 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44fcd9c6-d991-4e6a-903d-bd23c6123d47-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4b6xn\" (UID: \"44fcd9c6-d991-4e6a-903d-bd23c6123d47\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.611222 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8025a3f7-bab8-4787-bada-09aceb2e001b-config\") pod \"service-ca-operator-777779d784-8d9bj\" (UID: \"8025a3f7-bab8-4787-bada-09aceb2e001b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.611241 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/835eb0a3-753e-44e0-8124-ac51072a4692-etcd-service-ca\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.611256 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5081b0aa-9d4f-4741-9c05-4aab3e514f1b-config\") pod \"kube-controller-manager-operator-78b949d7b-kbvkk\" (UID: \"5081b0aa-9d4f-4741-9c05-4aab3e514f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.611768 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/82f0fdc8-482e-4dde-8ecd-3607d5548331-proxy-tls\") pod \"machine-config-controller-84d6567774-kt85s\" (UID: \"82f0fdc8-482e-4dde-8ecd-3607d5548331\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.615621 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c-signing-key\") pod \"service-ca-9c57cc56f-lgmhx\" (UID: \"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.619533 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/df34382d-7e6c-47e3-9b9f-e9f9498faaa0-srv-cert\") pod \"olm-operator-6b444d44fb-q2v7l\" (UID: \"df34382d-7e6c-47e3-9b9f-e9f9498faaa0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.623191 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e78a5905-c297-4c98-81ae-a8a194a86c37-webhook-cert\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 
15:38:34.624023 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/835eb0a3-753e-44e0-8124-ac51072a4692-etcd-ca\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.624353 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/835eb0a3-753e-44e0-8124-ac51072a4692-serving-cert\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.624938 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1b7f3bd6-78e8-46b2-ae10-631575d200ec-images\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.627240 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b4c64782-cd14-4c8c-b74a-4cb2616edd29-metrics-tls\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.627667 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5081b0aa-9d4f-4741-9c05-4aab3e514f1b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kbvkk\" (UID: \"5081b0aa-9d4f-4741-9c05-4aab3e514f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk" Dec 04 
15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.629311 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44fcd9c6-d991-4e6a-903d-bd23c6123d47-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4b6xn\" (UID: \"44fcd9c6-d991-4e6a-903d-bd23c6123d47\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.631579 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/835eb0a3-753e-44e0-8124-ac51072a4692-etcd-client\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.636087 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf698\" (UniqueName: \"kubernetes.io/projected/d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c-kube-api-access-sf698\") pod \"service-ca-9c57cc56f-lgmhx\" (UID: \"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.638425 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1b7f3bd6-78e8-46b2-ae10-631575d200ec-proxy-tls\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.638786 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8025a3f7-bab8-4787-bada-09aceb2e001b-serving-cert\") pod \"service-ca-operator-777779d784-8d9bj\" (UID: \"8025a3f7-bab8-4787-bada-09aceb2e001b\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.639760 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26a0fa4d-3430-4477-beae-2b0fa9819756-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.642260 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-tls\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.646251 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmrdr\" (UniqueName: \"kubernetes.io/projected/8ca81e36-1ef9-4b49-95a1-9c01f29afc81-kube-api-access-bmrdr\") pod \"catalog-operator-68c6474976-zvgw5\" (UID: \"8ca81e36-1ef9-4b49-95a1-9c01f29afc81\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.646500 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/df34382d-7e6c-47e3-9b9f-e9f9498faaa0-profile-collector-cert\") pod \"olm-operator-6b444d44fb-q2v7l\" (UID: \"df34382d-7e6c-47e3-9b9f-e9f9498faaa0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.647697 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4c64782-cd14-4c8c-b74a-4cb2616edd29-bound-sa-token\") pod 
\"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.653553 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44fcd9c6-d991-4e6a-903d-bd23c6123d47-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4b6xn\" (UID: \"44fcd9c6-d991-4e6a-903d-bd23c6123d47\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.653656 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4ntp\" (UniqueName: \"kubernetes.io/projected/b4c64782-cd14-4c8c-b74a-4cb2616edd29-kube-api-access-k4ntp\") pod \"ingress-operator-5b745b69d9-xmjc6\" (UID: \"b4c64782-cd14-4c8c-b74a-4cb2616edd29\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.654680 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84flj\" (UniqueName: \"kubernetes.io/projected/8025a3f7-bab8-4787-bada-09aceb2e001b-kube-api-access-84flj\") pod \"service-ca-operator-777779d784-8d9bj\" (UID: \"8025a3f7-bab8-4787-bada-09aceb2e001b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.655340 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr4r4\" (UniqueName: \"kubernetes.io/projected/1b7f3bd6-78e8-46b2-ae10-631575d200ec-kube-api-access-rr4r4\") pod \"machine-config-operator-74547568cd-fm9qg\" (UID: \"1b7f3bd6-78e8-46b2-ae10-631575d200ec\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.656110 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e5556-d548-4ffc-a8a2-7b476164f5b7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6lvdj\" (UID: \"6b3e5556-d548-4ffc-a8a2-7b476164f5b7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.656719 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72hvc\" (UniqueName: \"kubernetes.io/projected/e78a5905-c297-4c98-81ae-a8a194a86c37-kube-api-access-72hvc\") pod \"packageserver-d55dfcdfc-7lkq5\" (UID: \"e78a5905-c297-4c98-81ae-a8a194a86c37\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.656741 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdtd9\" (UniqueName: \"kubernetes.io/projected/b6dcbfc3-4f5f-4baf-9a44-9dbe1bc151a6-kube-api-access-gdtd9\") pod \"migrator-59844c95c7-qxt6b\" (UID: \"b6dcbfc3-4f5f-4baf-9a44-9dbe1bc151a6\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.658533 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-bound-sa-token\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.662860 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl6wh\" (UniqueName: \"kubernetes.io/projected/6b3e5556-d548-4ffc-a8a2-7b476164f5b7-kube-api-access-kl6wh\") pod \"package-server-manager-789f6589d5-6lvdj\" (UID: \"6b3e5556-d548-4ffc-a8a2-7b476164f5b7\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.663786 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.663865 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqmr9\" (UniqueName: \"kubernetes.io/projected/835eb0a3-753e-44e0-8124-ac51072a4692-kube-api-access-rqmr9\") pod \"etcd-operator-b45778765-dgtbf\" (UID: \"835eb0a3-753e-44e0-8124-ac51072a4692\") " pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.664940 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcrhn\" (UniqueName: \"kubernetes.io/projected/e1fde0c6-c891-4ccf-8947-3fbdfe8c243e-kube-api-access-tcrhn\") pod \"multus-admission-controller-857f4d67dd-drjfj\" (UID: \"e1fde0c6-c891-4ccf-8947-3fbdfe8c243e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.665002 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6mxz\" (UniqueName: \"kubernetes.io/projected/dbfa5fb1-8fb8-41ef-805d-1034cf88853a-kube-api-access-x6mxz\") pod \"control-plane-machine-set-operator-78cbb6b69f-sx899\" (UID: \"dbfa5fb1-8fb8-41ef-805d-1034cf88853a\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.665243 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbfa5fb1-8fb8-41ef-805d-1034cf88853a-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sx899\" (UID: \"dbfa5fb1-8fb8-41ef-805d-1034cf88853a\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.675086 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvvh9\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-kube-api-access-zvvh9\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.675097 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lsj4\" (UniqueName: \"kubernetes.io/projected/82f0fdc8-482e-4dde-8ecd-3607d5548331-kube-api-access-8lsj4\") pod \"machine-config-controller-84d6567774-kt85s\" (UID: \"82f0fdc8-482e-4dde-8ecd-3607d5548331\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.676066 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkg2b\" (UniqueName: \"kubernetes.io/projected/df34382d-7e6c-47e3-9b9f-e9f9498faaa0-kube-api-access-vkg2b\") pod \"olm-operator-6b444d44fb-q2v7l\" (UID: \"df34382d-7e6c-47e3-9b9f-e9f9498faaa0\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.687730 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5081b0aa-9d4f-4741-9c05-4aab3e514f1b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kbvkk\" (UID: \"5081b0aa-9d4f-4741-9c05-4aab3e514f1b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.690220 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.698962 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-csi-data-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699020 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz68g\" (UniqueName: \"kubernetes.io/projected/1acfd737-9d16-428f-b839-1f5f24a7298c-kube-api-access-vz68g\") pod \"kube-storage-version-migrator-operator-b67b599dd-n8vsq\" (UID: \"1acfd737-9d16-428f-b839-1f5f24a7298c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699063 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-75pd8\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699093 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e43204-f248-4d01-a9a8-9c264008e2fb-config-volume\") pod \"collect-profiles-29414370-pphl9\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699125 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zpw86\" (UniqueName: \"kubernetes.io/projected/19e43204-f248-4d01-a9a8-9c264008e2fb-kube-api-access-zpw86\") pod \"collect-profiles-29414370-pphl9\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699125 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-csi-data-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699151 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-registration-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699178 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmjf4\" (UniqueName: \"kubernetes.io/projected/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-kube-api-access-rmjf4\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699210 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1acfd737-9d16-428f-b839-1f5f24a7298c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n8vsq\" (UID: \"1acfd737-9d16-428f-b839-1f5f24a7298c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 
15:38:34.699235 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-plugins-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699266 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-75pd8\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699308 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699339 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-socket-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699380 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ed08c9a-e799-4301-a2f1-fec6ffa81c45-metrics-tls\") pod \"dns-default-cjht8\" (UID: \"2ed08c9a-e799-4301-a2f1-fec6ffa81c45\") " pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:34 crc 
kubenswrapper[4878]: I1204 15:38:34.699419 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ed08c9a-e799-4301-a2f1-fec6ffa81c45-config-volume\") pod \"dns-default-cjht8\" (UID: \"2ed08c9a-e799-4301-a2f1-fec6ffa81c45\") " pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699521 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9z4l\" (UniqueName: \"kubernetes.io/projected/30c86a77-cd80-40e2-a04a-acee06763136-kube-api-access-k9z4l\") pod \"machine-config-server-hwzql\" (UID: \"30c86a77-cd80-40e2-a04a-acee06763136\") " pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699550 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d181976c-fe3f-40f0-a8e4-5b1774143896-cert\") pod \"ingress-canary-cpnks\" (UID: \"d181976c-fe3f-40f0-a8e4-5b1774143896\") " pod="openshift-ingress-canary/ingress-canary-cpnks" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699577 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30c86a77-cd80-40e2-a04a-acee06763136-certs\") pod \"machine-config-server-hwzql\" (UID: \"30c86a77-cd80-40e2-a04a-acee06763136\") " pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699604 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch9rv\" (UniqueName: \"kubernetes.io/projected/2ed08c9a-e799-4301-a2f1-fec6ffa81c45-kube-api-access-ch9rv\") pod \"dns-default-cjht8\" (UID: \"2ed08c9a-e799-4301-a2f1-fec6ffa81c45\") " pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699639 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfx6j\" (UniqueName: \"kubernetes.io/projected/d181976c-fe3f-40f0-a8e4-5b1774143896-kube-api-access-cfx6j\") pod \"ingress-canary-cpnks\" (UID: \"d181976c-fe3f-40f0-a8e4-5b1774143896\") " pod="openshift-ingress-canary/ingress-canary-cpnks" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699667 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79vp6\" (UniqueName: \"kubernetes.io/projected/1437aa02-6698-481c-ab03-8b2c02f64774-kube-api-access-79vp6\") pod \"marketplace-operator-79b997595-75pd8\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699693 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-mountpoint-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699705 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-registration-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699718 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30c86a77-cd80-40e2-a04a-acee06763136-node-bootstrap-token\") pod \"machine-config-server-hwzql\" (UID: \"30c86a77-cd80-40e2-a04a-acee06763136\") " pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:34 crc 
kubenswrapper[4878]: I1204 15:38:34.699904 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e43204-f248-4d01-a9a8-9c264008e2fb-secret-volume\") pod \"collect-profiles-29414370-pphl9\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.699933 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1acfd737-9d16-428f-b839-1f5f24a7298c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n8vsq\" (UID: \"1acfd737-9d16-428f-b839-1f5f24a7298c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.700103 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1acfd737-9d16-428f-b839-1f5f24a7298c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-n8vsq\" (UID: \"1acfd737-9d16-428f-b839-1f5f24a7298c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.701226 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e43204-f248-4d01-a9a8-9c264008e2fb-config-volume\") pod \"collect-profiles-29414370-pphl9\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.701578 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-plugins-dir\") pod 
\"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.702117 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-socket-dir\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: E1204 15:38:34.702537 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.202515261 +0000 UTC m=+159.165052217 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.703991 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-75pd8\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.705261 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-mountpoint-dir\") 
pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.705518 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.706249 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ed08c9a-e799-4301-a2f1-fec6ffa81c45-config-volume\") pod \"dns-default-cjht8\" (UID: \"2ed08c9a-e799-4301-a2f1-fec6ffa81c45\") " pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.708904 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e43204-f248-4d01-a9a8-9c264008e2fb-secret-volume\") pod \"collect-profiles-29414370-pphl9\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.710720 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.710969 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2ed08c9a-e799-4301-a2f1-fec6ffa81c45-metrics-tls\") pod \"dns-default-cjht8\" (UID: \"2ed08c9a-e799-4301-a2f1-fec6ffa81c45\") " pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.711701 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d181976c-fe3f-40f0-a8e4-5b1774143896-cert\") pod \"ingress-canary-cpnks\" (UID: \"d181976c-fe3f-40f0-a8e4-5b1774143896\") " pod="openshift-ingress-canary/ingress-canary-cpnks" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.713100 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/30c86a77-cd80-40e2-a04a-acee06763136-node-bootstrap-token\") pod \"machine-config-server-hwzql\" (UID: \"30c86a77-cd80-40e2-a04a-acee06763136\") " pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.721194 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.723723 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/30c86a77-cd80-40e2-a04a-acee06763136-certs\") pod \"machine-config-server-hwzql\" (UID: \"30c86a77-cd80-40e2-a04a-acee06763136\") " pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.727744 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpw86\" (UniqueName: \"kubernetes.io/projected/19e43204-f248-4d01-a9a8-9c264008e2fb-kube-api-access-zpw86\") pod \"collect-profiles-29414370-pphl9\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.728040 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-75pd8\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.729026 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1acfd737-9d16-428f-b839-1f5f24a7298c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-n8vsq\" (UID: \"1acfd737-9d16-428f-b839-1f5f24a7298c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.732131 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.748760 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.757561 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmjf4\" (UniqueName: \"kubernetes.io/projected/2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f-kube-api-access-rmjf4\") pod \"csi-hostpathplugin-8dtmz\" (UID: \"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f\") " pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.769748 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9z4l\" (UniqueName: \"kubernetes.io/projected/30c86a77-cd80-40e2-a04a-acee06763136-kube-api-access-k9z4l\") pod \"machine-config-server-hwzql\" (UID: \"30c86a77-cd80-40e2-a04a-acee06763136\") " pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.781126 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks"] Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.793351 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.800628 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:34 crc kubenswrapper[4878]: E1204 15:38:34.800784 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.300745892 +0000 UTC m=+159.263282848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.800975 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:34 crc kubenswrapper[4878]: E1204 15:38:34.801389 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.301377268 +0000 UTC m=+159.263914414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.802371 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch9rv\" (UniqueName: \"kubernetes.io/projected/2ed08c9a-e799-4301-a2f1-fec6ffa81c45-kube-api-access-ch9rv\") pod \"dns-default-cjht8\" (UID: \"2ed08c9a-e799-4301-a2f1-fec6ffa81c45\") " pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.828519 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz68g\" (UniqueName: \"kubernetes.io/projected/1acfd737-9d16-428f-b839-1f5f24a7298c-kube-api-access-vz68g\") pod \"kube-storage-version-migrator-operator-b67b599dd-n8vsq\" (UID: \"1acfd737-9d16-428f-b839-1f5f24a7298c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.836998 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79vp6\" (UniqueName: \"kubernetes.io/projected/1437aa02-6698-481c-ab03-8b2c02f64774-kube-api-access-79vp6\") pod \"marketplace-operator-79b997595-75pd8\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.856003 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-4x82r"] Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.861729 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfx6j\" (UniqueName: \"kubernetes.io/projected/d181976c-fe3f-40f0-a8e4-5b1774143896-kube-api-access-cfx6j\") pod \"ingress-canary-cpnks\" (UID: \"d181976c-fe3f-40f0-a8e4-5b1774143896\") " pod="openshift-ingress-canary/ingress-canary-cpnks" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.885401 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.893791 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.901149 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.901957 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:34 crc kubenswrapper[4878]: E1204 15:38:34.903183 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.40315005 +0000 UTC m=+159.365687186 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.916318 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.918442 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.925385 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.940521 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.944183 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.949598 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.960227 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.972302 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.977819 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-v4cwq"] Dec 04 15:38:34 crc kubenswrapper[4878]: I1204 15:38:34.981596 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-h2w2r"] Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.006755 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.007279 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.507255962 +0000 UTC m=+159.469792938 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: W1204 15:38:35.011061 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod988eba95_b990_4f5a_ad25_e4129a8849d1.slice/crio-d1ff95bfcab48fdded180d5a66503fe461ee33d1eadd443944827cb9eadd4891 WatchSource:0}: Error finding container d1ff95bfcab48fdded180d5a66503fe461ee33d1eadd443944827cb9eadd4891: Status 404 returned error can't find the container with id d1ff95bfcab48fdded180d5a66503fe461ee33d1eadd443944827cb9eadd4891 Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.011227 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" event={"ID":"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f","Type":"ContainerStarted","Data":"0015b6179e3187d42cec5474be12d44e1b430d68e0482c7af27afa4782a2d21b"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.016464 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" event={"ID":"0c31dded-d5e0-4f14-8de8-c4cf3ec56236","Type":"ContainerStarted","Data":"e838de45a687a1c64fa153c01c85c9bb9c1185c3459e9176f708886281923aa0"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.017559 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.023589 4878 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" event={"ID":"cc53410b-3bb5-45cf-aa14-ca460c71e5f0","Type":"ContainerStarted","Data":"2f8e6d87bb130acbb18b5b5b77472beda81d6c5b1b2eb40b15c3a0fdf0d82b3d"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.023916 4878 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-g9zqn container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.023959 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.032654 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tcd9t" event={"ID":"bc4ac80a-bd9e-41b4-9954-219008ad570d","Type":"ContainerStarted","Data":"c840a626b79f3e34937858a835c150d7b3735b168b7764cd41955b4506de5cc1"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.035412 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" event={"ID":"46626fe2-913c-4594-b04a-fce651a9924f","Type":"ContainerStarted","Data":"d7c5081d20622ff5303592d930bca46bbbe28800b1c16ada73cc7ba116ff18bf"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.043606 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" 
event={"ID":"efc42132-e5bc-4a5e-90ee-bf34ad2a286b","Type":"ContainerStarted","Data":"9bc2b32de3f299d159a0a340b728f67f58f4175d2f1a701365e1a3e4a669cca8"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.054032 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-hwzql" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.066778 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" event={"ID":"bf1f752c-0d46-4655-930a-c063d386b3c9","Type":"ContainerStarted","Data":"4994f7944145f8d4e4b67aeb01b89745c871ae7ce1d3e3a1794f5b3bbb58b302"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.069231 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pwnk4" event={"ID":"199d51ae-0d72-4e64-a8eb-546c07076c21","Type":"ContainerStarted","Data":"1252eabd5c0e9e73e42a6d9c86cd801502b74c017acda31857b210dd29d10aed"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.070235 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk"] Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.100677 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.117301 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-cpnks" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.117852 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.117981 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.617951662 +0000 UTC m=+159.580488618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.119921 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.121279 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.621265827 +0000 UTC m=+159.583802853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.146918 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" event={"ID":"c178d3ca-882b-4143-bad1-b648220f66c7","Type":"ContainerStarted","Data":"18cb90d259e306663979b2285bbb7e0ad03939d133f7aca640fce0732dde98e7"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.155441 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" event={"ID":"1fa17e12-0683-4fba-810b-fa1c10a2738f","Type":"ContainerStarted","Data":"cf9264963dbfad1ccf2769c6ae3a03b2fea328dea177d19620ea162801afa729"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.173383 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" event={"ID":"99943e7a-151b-4129-9205-f7e78e43fd3c","Type":"ContainerStarted","Data":"e3991dd69713318c576d55ff76f1acc9e67e4ccdd5351f824aab94c7d4fee656"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.223492 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 
04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.225152 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.725118433 +0000 UTC m=+159.687655389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.226997 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.227039 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" event={"ID":"1c451d04-9071-4d89-a6aa-a26e07523cf6","Type":"ContainerStarted","Data":"38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.232799 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" event={"ID":"7689249c-9002-4b86-ba80-67bff6b584c4","Type":"ContainerStarted","Data":"0f5ce35a81193bac13a22d0b635a1f0909fc9621dcde9105294c3d6962294eb9"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.233787 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 
15:38:35.234577 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9"] Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.236054 4878 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-br92t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.236098 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" podUID="1c451d04-9071-4d89-a6aa-a26e07523cf6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.247266 4878 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zvbhg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.247323 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" podUID="7689249c-9002-4b86-ba80-67bff6b584c4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.247795 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" 
event={"ID":"620354cc-25a3-433f-9ee7-af4ed1f94827","Type":"ContainerStarted","Data":"a250146b655c78d1c2049bc74845a291f0e5f9d897d99f2772cee3c2ee640189"} Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.304066 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b"] Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.325241 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.327280 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.827257054 +0000 UTC m=+159.789794200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.400629 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6"] Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.426828 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.427430 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:35.927396544 +0000 UTC m=+159.889933500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.440350 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8dtmz"] Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.528902 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.529915 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.029897154 +0000 UTC m=+159.992434110 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.632240 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.632402 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.132378924 +0000 UTC m=+160.094915870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.632895 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.633419 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.13339477 +0000 UTC m=+160.095931796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.733711 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.734012 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.233829888 +0000 UTC m=+160.196366844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.734421 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.734927 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.234915626 +0000 UTC m=+160.197452582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.836780 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.837083 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.337064557 +0000 UTC m=+160.299601513 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.859197 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" podStartSLOduration=136.859175274 podStartE2EDuration="2m16.859175274s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:35.857717327 +0000 UTC m=+159.820254283" watchObservedRunningTime="2025-12-04 15:38:35.859175274 +0000 UTC m=+159.821712230" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.863696 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" podStartSLOduration=136.86367264 podStartE2EDuration="2m16.86367264s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:35.818418138 +0000 UTC m=+159.780955094" watchObservedRunningTime="2025-12-04 15:38:35.86367264 +0000 UTC m=+159.826209596" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.883653 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk"] Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.908966 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" podStartSLOduration=136.908945212 podStartE2EDuration="2m16.908945212s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:35.908462369 +0000 UTC m=+159.870999325" watchObservedRunningTime="2025-12-04 15:38:35.908945212 +0000 UTC m=+159.871482168" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.924814 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s"] Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.952378 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:35 crc kubenswrapper[4878]: E1204 15:38:35.960313 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.460293699 +0000 UTC m=+160.422830645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.975693 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8zkjc" podStartSLOduration=136.974615007 podStartE2EDuration="2m16.974615007s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:35.94510775 +0000 UTC m=+159.907644726" watchObservedRunningTime="2025-12-04 15:38:35.974615007 +0000 UTC m=+159.937151963" Dec 04 15:38:35 crc kubenswrapper[4878]: I1204 15:38:35.984051 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" podStartSLOduration=135.984020178 podStartE2EDuration="2m15.984020178s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:35.980636671 +0000 UTC m=+159.943173627" watchObservedRunningTime="2025-12-04 15:38:35.984020178 +0000 UTC m=+159.946557134" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.017680 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.053450 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.053982 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.553959433 +0000 UTC m=+160.516496389 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.066897 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.156318 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.157154 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2025-12-04 15:38:36.657134691 +0000 UTC m=+160.619671657 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.278774 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.281705 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.781653457 +0000 UTC m=+160.744190413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.297062 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.368474 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tcd9t" event={"ID":"bc4ac80a-bd9e-41b4-9954-219008ad570d","Type":"ContainerStarted","Data":"7b0cd8f505de794c0936e8af7b81f9fb6a224da665adf7dd64436478a56b24ba"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.369716 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.379491 4878 patch_prober.go:28] interesting pod/console-operator-58897d9998-tcd9t container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.379577 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tcd9t" podUID="bc4ac80a-bd9e-41b4-9954-219008ad570d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.381829 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.382234 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.882219587 +0000 UTC m=+160.844756543 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.396662 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-tcd9t" podStartSLOduration=137.396641707 podStartE2EDuration="2m17.396641707s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:36.395994721 +0000 UTC m=+160.358531707" watchObservedRunningTime="2025-12-04 15:38:36.396641707 +0000 UTC m=+160.359178663" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.400485 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hwzql" 
event={"ID":"30c86a77-cd80-40e2-a04a-acee06763136","Type":"ContainerStarted","Data":"68cf2de9a636ad08ff91c987b09df7cab637cac26b09c5fd8ecdf7fc7cfff401"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.409779 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4x82r" event={"ID":"988eba95-b990-4f5a-ad25-e4129a8849d1","Type":"ContainerStarted","Data":"55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.410278 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4x82r" event={"ID":"988eba95-b990-4f5a-ad25-e4129a8849d1","Type":"ContainerStarted","Data":"d1ff95bfcab48fdded180d5a66503fe461ee33d1eadd443944827cb9eadd4891"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.450146 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lgmhx"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.450789 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4x82r" podStartSLOduration=137.450748906 podStartE2EDuration="2m17.450748906s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:36.442545435 +0000 UTC m=+160.405082391" watchObservedRunningTime="2025-12-04 15:38:36.450748906 +0000 UTC m=+160.413285862" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.452579 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-pwnk4" event={"ID":"199d51ae-0d72-4e64-a8eb-546c07076c21","Type":"ContainerStarted","Data":"fd7c3bc4e883c8af40e8d3e917e1cd2efd54920a47c323983e4495368d8e09a6"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.464357 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk" event={"ID":"5081b0aa-9d4f-4741-9c05-4aab3e514f1b","Type":"ContainerStarted","Data":"a5ac3a767a0930ae47f9adf15fb9d73083fbfb230ab0bf1aa2ff35711d5b426c"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.466789 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" event={"ID":"cc53410b-3bb5-45cf-aa14-ca460c71e5f0","Type":"ContainerStarted","Data":"8bc1e5a831e0313a07274f5436c4adc3655a483004cffc349f626883159a85dc"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.489085 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.489567 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:36.989530501 +0000 UTC m=+160.952067457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.490731 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-pwnk4" podStartSLOduration=136.490701751 podStartE2EDuration="2m16.490701751s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:36.488764272 +0000 UTC m=+160.451301258" watchObservedRunningTime="2025-12-04 15:38:36.490701751 +0000 UTC m=+160.453238707" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.495358 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" event={"ID":"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f","Type":"ContainerStarted","Data":"41b30df07953ee343d7d911d64897444cbe60bd136d8fd8c8a47db2b1b065b4d"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.502714 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" event={"ID":"8c993c9c-5308-4cb6-9e94-f477625c6263","Type":"ContainerStarted","Data":"88982e6d9945651ba04ef97d70402afa0fc608405840569f27c4118a2e15a3e1"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.534708 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.566074 4878 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" event={"ID":"8f66d4fb-9cfd-4bf5-93cb-e71ae704ff9f","Type":"ContainerStarted","Data":"9f0eb5ea7d8bc3a8a742c51a54c5c9b7923874ae88ab7dedf330d85cfdd4bfd4"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.579427 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" event={"ID":"82f0fdc8-482e-4dde-8ecd-3607d5548331","Type":"ContainerStarted","Data":"a8460393dee5511151d8d0741e19459535c93b64ea99c132d9bf3c34d7059857"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.580283 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.591023 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.592440 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.092421902 +0000 UTC m=+161.054958848 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.592861 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.594779 4878 generic.go:334] "Generic (PLEG): container finished" podID="c178d3ca-882b-4143-bad1-b648220f66c7" containerID="65d3cd8f4bdef74c1baf5d4e65ff0ad4e4d381020ff176a533c5af029bcf9dcf" exitCode=0 Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.594907 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" event={"ID":"c178d3ca-882b-4143-bad1-b648220f66c7","Type":"ContainerDied","Data":"65d3cd8f4bdef74c1baf5d4e65ff0ad4e4d381020ff176a533c5af029bcf9dcf"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.606243 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-75pd8"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.613030 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ggckn" podStartSLOduration=137.61300322 podStartE2EDuration="2m17.61300322s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:36.603365703 +0000 UTC m=+160.565902669" 
watchObservedRunningTime="2025-12-04 15:38:36.61300322 +0000 UTC m=+160.575540176" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.625298 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" event={"ID":"46626fe2-913c-4594-b04a-fce651a9924f","Type":"ContainerStarted","Data":"771f670b7a91f3e918bd849eb1d741547671b5d22cdd2ee8e9d72ba9421eb137"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.642918 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" event={"ID":"1fa17e12-0683-4fba-810b-fa1c10a2738f","Type":"ContainerStarted","Data":"e9c156cde135ab4997c5bb3fc3dd97674a090c1292ec002bc3ab01ba59917a7b"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.665204 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" event={"ID":"7689249c-9002-4b86-ba80-67bff6b584c4","Type":"ContainerStarted","Data":"a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.667085 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-dgtbf"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.673261 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" event={"ID":"8ca81e36-1ef9-4b49-95a1-9c01f29afc81","Type":"ContainerStarted","Data":"2a260c59996c268c17b0718cfdcb29261b626e779a14195af1b54dd8353cd181"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.674837 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" event={"ID":"b4c64782-cd14-4c8c-b74a-4cb2616edd29","Type":"ContainerStarted","Data":"feda6ef1d7d02d96accc03fc074a4c32490bb7fcf72c134396035fe628b25996"} Dec 04 15:38:36 
crc kubenswrapper[4878]: I1204 15:38:36.674906 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" event={"ID":"b4c64782-cd14-4c8c-b74a-4cb2616edd29","Type":"ContainerStarted","Data":"ae2496e3716bdd4f3bbcb06808c28ad915cbf7c2c2da368fedae38a046fbba2f"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.676263 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.677763 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" event={"ID":"cda9500b-96aa-457f-b588-cb2efd9f36e9","Type":"ContainerStarted","Data":"b1f20cf91aa15e8bbee8843971f92b11a01c858c839d1159a2479107f15b434a"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.689165 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-bqxx9" event={"ID":"620354cc-25a3-433f-9ee7-af4ed1f94827","Type":"ContainerStarted","Data":"9532dbdbcf67e3ac3ff6cc08c0f9da642dc25cbe151db955525ae198c1fb4424"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.690217 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:36 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:36 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:36 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.690305 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.691609 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.695753 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.195725483 +0000 UTC m=+161.158262439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.697383 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.698994 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.198976756 +0000 UTC m=+161.161513712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.701275 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qn9w6" podStartSLOduration=136.701256605 podStartE2EDuration="2m16.701256605s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:36.695411255 +0000 UTC m=+160.657948211" watchObservedRunningTime="2025-12-04 15:38:36.701256605 +0000 UTC m=+160.663793561" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.701954 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cjht8"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.705269 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-drjfj"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.713238 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b" event={"ID":"b6dcbfc3-4f5f-4baf-9a44-9dbe1bc151a6","Type":"ContainerStarted","Data":"756f233c1409fab08f3e71e96220d8469367b99ea5b0d6fbec6a617d270dab93"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.713296 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b" event={"ID":"b6dcbfc3-4f5f-4baf-9a44-9dbe1bc151a6","Type":"ContainerStarted","Data":"af894c301b5f9b87eb8707f6caeb986f972210398822850137757d0238bc8d9b"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.718295 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" event={"ID":"19e43204-f248-4d01-a9a8-9c264008e2fb","Type":"ContainerStarted","Data":"68ff31e1ccba8f6dda94f8d3db1f599a783482db6eaebd167017b62fc67753e3"} Dec 04 15:38:36 crc kubenswrapper[4878]: W1204 15:38:36.723508 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8025a3f7_bab8_4787_bada_09aceb2e001b.slice/crio-4e5e0f32f73333bb685e4a1015b3a4a0a4e78f530924f9709437a80d3109703a WatchSource:0}: Error finding container 4e5e0f32f73333bb685e4a1015b3a4a0a4e78f530924f9709437a80d3109703a: Status 404 returned error can't find the container with id 4e5e0f32f73333bb685e4a1015b3a4a0a4e78f530924f9709437a80d3109703a Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.724167 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-cpnks"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.726588 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.729810 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.743028 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.772393 4878 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v4cwq" event={"ID":"af8c7a67-79c2-4892-a180-ee539e48bd2b","Type":"ContainerStarted","Data":"bdb6109f15158f29e73d78b3231807d2aefcdab2cd2ed0b3c98b67117a6433c3"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.772468 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-v4cwq" event={"ID":"af8c7a67-79c2-4892-a180-ee539e48bd2b","Type":"ContainerStarted","Data":"dd606515ca5633918a70faab2fcc2af0523899cc35d8aa2cb30d338dc02ca1c5"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.776653 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-v4cwq" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.782844 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq"] Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.786433 4878 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4cwq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.786575 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v4cwq" podUID="af8c7a67-79c2-4892-a180-ee539e48bd2b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.794134 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" podStartSLOduration=137.794104947 podStartE2EDuration="2m17.794104947s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:36.759196682 +0000 UTC m=+160.721733628" watchObservedRunningTime="2025-12-04 15:38:36.794104947 +0000 UTC m=+160.756641903" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.800966 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.818562 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.318523454 +0000 UTC m=+161.281060410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:36 crc kubenswrapper[4878]: W1204 15:38:36.821212 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b7f3bd6_78e8_46b2_ae10_631575d200ec.slice/crio-7592e4219ce54d6cb672cd668327186c53af00bf4cf1e51a9db652c11d2b5ebd WatchSource:0}: Error finding container 7592e4219ce54d6cb672cd668327186c53af00bf4cf1e51a9db652c11d2b5ebd: Status 404 returned error can't find the container with id 7592e4219ce54d6cb672cd668327186c53af00bf4cf1e51a9db652c11d2b5ebd Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.822379 4878 generic.go:334] "Generic (PLEG): container finished" podID="99943e7a-151b-4129-9205-f7e78e43fd3c" containerID="1eb934c576eeecab436795e874c421aa69c90e15d2c3c78d4a5b5b31b1508f40" exitCode=0 Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.822506 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" event={"ID":"99943e7a-151b-4129-9205-f7e78e43fd3c","Type":"ContainerDied","Data":"1eb934c576eeecab436795e874c421aa69c90e15d2c3c78d4a5b5b31b1508f40"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.845773 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-v4cwq" podStartSLOduration=137.845752603 podStartE2EDuration="2m17.845752603s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-04 15:38:36.838254761 +0000 UTC m=+160.800791707" watchObservedRunningTime="2025-12-04 15:38:36.845752603 +0000 UTC m=+160.808289559" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.849009 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" event={"ID":"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba","Type":"ContainerStarted","Data":"a9858762909495b0b16dfd9cb6257559bf20c1289da74fe0bb2665e5b521376e"} Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.880935 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.881117 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:38:36 crc kubenswrapper[4878]: I1204 15:38:36.917071 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:36 crc kubenswrapper[4878]: E1204 15:38:36.918549 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.418531531 +0000 UTC m=+161.381068487 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.024586 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.024765 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.524730076 +0000 UTC m=+161.487267032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.025445 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.032582 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.532557287 +0000 UTC m=+161.495094323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.127506 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.128274 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.628250043 +0000 UTC m=+161.590786999 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.230534 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.230917 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.730896767 +0000 UTC m=+161.693433723 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.334974 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.335437 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.835393718 +0000 UTC m=+161.797930674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.451052 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.459830 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:37.95980359 +0000 UTC m=+161.922340546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.553566 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.554086 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:38.054061879 +0000 UTC m=+162.016598845 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.657131 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.657596 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:38.157577736 +0000 UTC m=+162.120114692 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.684784 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 04 15:38:37 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld
Dec 04 15:38:37 crc kubenswrapper[4878]: [+]process-running ok
Dec 04 15:38:37 crc kubenswrapper[4878]: healthz check failed
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.684853 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.758534 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.759289 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:38.259245095 +0000 UTC m=+162.221782041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.870705 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.871333 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:38.371316711 +0000 UTC m=+162.333853667 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.903429 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" event={"ID":"e78a5905-c297-4c98-81ae-a8a194a86c37","Type":"ContainerStarted","Data":"e4297c740ffcbcfe2ffb93baa1a3f67d62373e5a4de8ac9bc60194ad68205236"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.903481 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" event={"ID":"e78a5905-c297-4c98-81ae-a8a194a86c37","Type":"ContainerStarted","Data":"5a7fff5fa54196743dd6452ba63276b0688677262baa23542d72ba97fdde2f8b"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.905543 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5"
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.907777 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b" event={"ID":"b6dcbfc3-4f5f-4baf-9a44-9dbe1bc151a6","Type":"ContainerStarted","Data":"f78a9aad7707f3b55366bf90a674dfd916fcb7afb0126c43d4d8f8a3151409bb"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.908238 4878 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7lkq5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body=
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.908293 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" podUID="e78a5905-c297-4c98-81ae-a8a194a86c37" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused"
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.920987 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" event={"ID":"82f0fdc8-482e-4dde-8ecd-3607d5548331","Type":"ContainerStarted","Data":"5bd01d15a6a26b9bde58a9842d09d313401f825e1cea074a23d19b8c1449754e"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.933127 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" event={"ID":"df34382d-7e6c-47e3-9b9f-e9f9498faaa0","Type":"ContainerStarted","Data":"f7ddb385711f81c2231a65463d6a02a99c8a4c6aeb6f95525d5fb29befe80d29"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.933189 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" event={"ID":"df34382d-7e6c-47e3-9b9f-e9f9498faaa0","Type":"ContainerStarted","Data":"e99e1c84275a785af3c205617894f2d286fc61db9c482149f58a1ffaaee7ead1"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.934062 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l"
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.935165 4878 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-q2v7l container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.935211 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" podUID="df34382d-7e6c-47e3-9b9f-e9f9498faaa0" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.957507 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qxt6b" podStartSLOduration=137.957485502 podStartE2EDuration="2m17.957485502s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:37.95624386 +0000 UTC m=+161.918780816" watchObservedRunningTime="2025-12-04 15:38:37.957485502 +0000 UTC m=+161.920022458"
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.960155 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" event={"ID":"cda9500b-96aa-457f-b588-cb2efd9f36e9","Type":"ContainerStarted","Data":"1c266b8a07571f206f8684f423d6e8e77c1588a173efe17be92c6d69967d1d56"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.970458 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-hwzql" event={"ID":"30c86a77-cd80-40e2-a04a-acee06763136","Type":"ContainerStarted","Data":"a0105bd068b725d69a2e53a78c10e2a98b2176b390ad4e59ae872b9287d3a827"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.976570 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:38:37 crc kubenswrapper[4878]: E1204 15:38:37.977980 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:38.477958708 +0000 UTC m=+162.440495664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.991107 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cjht8" event={"ID":"2ed08c9a-e799-4301-a2f1-fec6ffa81c45","Type":"ContainerStarted","Data":"830d425ecfbf608cfc427084066dec90f8ad15f22857c98c0f2d95827ad26e22"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.997759 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899" event={"ID":"dbfa5fb1-8fb8-41ef-805d-1034cf88853a","Type":"ContainerStarted","Data":"662e1ac59c79e538b1216e7f772d92964fac2f01ba9866bdfef8c4ee3384078c"}
Dec 04 15:38:37 crc kubenswrapper[4878]: I1204 15:38:37.997814 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899" event={"ID":"dbfa5fb1-8fb8-41ef-805d-1034cf88853a","Type":"ContainerStarted","Data":"9e763b407fcabf42214c8a3d00a9bcbe76830b12b65f07a7aa139f8b12c58dbc"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.004234 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" event={"ID":"bf1f752c-0d46-4655-930a-c063d386b3c9","Type":"ContainerStarted","Data":"7eac72c7b7008142c92f824998eacb8d4e805505bac193cc2be980316905ce26"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.005203 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" podStartSLOduration=138.005181576 podStartE2EDuration="2m18.005181576s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.004466308 +0000 UTC m=+161.967003264" watchObservedRunningTime="2025-12-04 15:38:38.005181576 +0000 UTC m=+161.967718532"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.028322 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" event={"ID":"1437aa02-6698-481c-ab03-8b2c02f64774","Type":"ContainerStarted","Data":"cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.028379 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" event={"ID":"1437aa02-6698-481c-ab03-8b2c02f64774","Type":"ContainerStarted","Data":"d70ad031b6aadbf2e50c2f30b46289539187eb306ee5b4a4d11091afe318dcbb"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.028848 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.032675 4878 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-75pd8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body=
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.032724 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" podUID="1437aa02-6698-481c-ab03-8b2c02f64774" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.036848 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" event={"ID":"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c","Type":"ContainerStarted","Data":"70c23f64645368cb878d353f2806d6c12f6a388d2632bc9ab0d11efb1e4edb57"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.056097 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sx899" podStartSLOduration=138.056073352 podStartE2EDuration="2m18.056073352s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.038903952 +0000 UTC m=+162.001440928" watchObservedRunningTime="2025-12-04 15:38:38.056073352 +0000 UTC m=+162.018610308"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.083771 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.093388 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" event={"ID":"c178d3ca-882b-4143-bad1-b648220f66c7","Type":"ContainerStarted","Data":"5b2c155ca19853199cc6b7978e3c57d76d52692da68cc862515e76f8f4b78363"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.093811 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.101336 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" podStartSLOduration=138.101306473 podStartE2EDuration="2m18.101306473s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.098346787 +0000 UTC m=+162.060883743" watchObservedRunningTime="2025-12-04 15:38:38.101306473 +0000 UTC m=+162.063843429"
Dec 04 15:38:38 crc kubenswrapper[4878]: E1204 15:38:38.106880 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:38.606832615 +0000 UTC m=+162.569369571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.107115 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cpnks" event={"ID":"d181976c-fe3f-40f0-a8e4-5b1774143896","Type":"ContainerStarted","Data":"686ede75664bcf9e3f6f1ab82ace73c07512d38b090fcd2f2c3749c531d1d713"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.115172 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" event={"ID":"44fcd9c6-d991-4e6a-903d-bd23c6123d47","Type":"ContainerStarted","Data":"1add57006065a1a48c0d36a65e77a2239208041b20403f6ef220955927cef823"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.129672 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" event={"ID":"8c993c9c-5308-4cb6-9e94-f477625c6263","Type":"ContainerStarted","Data":"253e182a327b824261a17eec932c4cee88d490195b16b037b75af2e5c996d0f3"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.158832 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" event={"ID":"8ca81e36-1ef9-4b49-95a1-9c01f29afc81","Type":"ContainerStarted","Data":"edb03a9cbb536681f6a494186e4670db15aecbb3cd4e5db5b4f7faae06bcf718"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.161707 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.173468 4878 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zvgw5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.173583 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" podUID="8ca81e36-1ef9-4b49-95a1-9c01f29afc81" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.189801 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj" event={"ID":"e1fde0c6-c891-4ccf-8947-3fbdfe8c243e","Type":"ContainerStarted","Data":"cdcac39be055aada5d5921fe9072ac348934788aa0513d8c22fb3e6aafaa9a8a"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.198940 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wjfks" podStartSLOduration=138.198844646 podStartE2EDuration="2m18.198844646s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.148040802 +0000 UTC m=+162.110577758" watchObservedRunningTime="2025-12-04 15:38:38.198844646 +0000 UTC m=+162.161381602"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.199072 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-hwzql" podStartSLOduration=7.199065752 podStartE2EDuration="7.199065752s" podCreationTimestamp="2025-12-04 15:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.17054179 +0000 UTC m=+162.133078746" watchObservedRunningTime="2025-12-04 15:38:38.199065752 +0000 UTC m=+162.161602718"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.202678 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:38:38 crc kubenswrapper[4878]: E1204 15:38:38.207289 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:38.707264542 +0000 UTC m=+162.669801498 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.207695 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-2bchj" podStartSLOduration=139.207671033 podStartE2EDuration="2m19.207671033s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.197369638 +0000 UTC m=+162.159906614" watchObservedRunningTime="2025-12-04 15:38:38.207671033 +0000 UTC m=+162.170207989"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.212939 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk" event={"ID":"5081b0aa-9d4f-4741-9c05-4aab3e514f1b","Type":"ContainerStarted","Data":"6a5e9ac40889d4fe8afa12369c6e3d8406d3c189d3e92dc658f92cef0067543a"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.214768 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" event={"ID":"835eb0a3-753e-44e0-8124-ac51072a4692","Type":"ContainerStarted","Data":"3e6b04dca8333eb727f27f06d520c7b00f46069de3b3b4e7242e1cba2027aa21"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.215422 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" event={"ID":"1b7f3bd6-78e8-46b2-ae10-631575d200ec","Type":"ContainerStarted","Data":"7592e4219ce54d6cb672cd668327186c53af00bf4cf1e51a9db652c11d2b5ebd"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.217950 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" event={"ID":"cc53410b-3bb5-45cf-aa14-ca460c71e5f0","Type":"ContainerStarted","Data":"e55207779d5df376e767dd72ee2888322978cda8745d36eefda5c323ae4aa571"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.219388 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" event={"ID":"6b3e5556-d548-4ffc-a8a2-7b476164f5b7","Type":"ContainerStarted","Data":"2e20a0a289ae393a1c4a64323a2915440b1a5540fe56510e08f0dcdb36909854"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.220034 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj" event={"ID":"8025a3f7-bab8-4787-bada-09aceb2e001b","Type":"ContainerStarted","Data":"4e5e0f32f73333bb685e4a1015b3a4a0a4e78f530924f9709437a80d3109703a"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.221167 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" event={"ID":"1fa17e12-0683-4fba-810b-fa1c10a2738f","Type":"ContainerStarted","Data":"51a6ce33134eacefc9eea5be611f06c0a702f9cc0b23a8bb3a9de538f5688bd3"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.233753 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" podStartSLOduration=138.233737032 podStartE2EDuration="2m18.233737032s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.232147921 +0000 UTC m=+162.194684877" watchObservedRunningTime="2025-12-04 15:38:38.233737032 +0000 UTC m=+162.196273988"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.234001 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" event={"ID":"1acfd737-9d16-428f-b839-1f5f24a7298c","Type":"ContainerStarted","Data":"508badbc5b9ab9635b692178564f5716d098f723da6fa220a128423bf9485f80"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.239468 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" event={"ID":"19e43204-f248-4d01-a9a8-9c264008e2fb","Type":"ContainerStarted","Data":"2be8d263d74396cd579ec25b6078d659548818ef985a10cf8eb441d9d7ae2978"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.250419 4878 generic.go:334] "Generic (PLEG): container finished" podID="9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba" containerID="949fcb9d7d2beb328ef82a49e36175c807f0c523f2d30a487da63516ff052b84" exitCode=0
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.250539 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" event={"ID":"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba","Type":"ContainerDied","Data":"949fcb9d7d2beb328ef82a49e36175c807f0c523f2d30a487da63516ff052b84"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.257519 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" podStartSLOduration=139.257494611 podStartE2EDuration="2m19.257494611s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.254393812 +0000 UTC m=+162.216930768" watchObservedRunningTime="2025-12-04 15:38:38.257494611 +0000 UTC m=+162.220031567"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.274203 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" event={"ID":"b4c64782-cd14-4c8c-b74a-4cb2616edd29","Type":"ContainerStarted","Data":"091935cc052caeaa06384177901001f501245097176b80c881905288d8a79e6b"}
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.279023 4878 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4cwq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body=
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.279083 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v4cwq" podUID="af8c7a67-79c2-4892-a180-ee539e48bd2b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.308531 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z"
Dec 04 15:38:38 crc kubenswrapper[4878]: E1204 15:38:38.309743 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:38.809725792 +0000 UTC m=+162.772262748 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.318482 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" podStartSLOduration=138.318459346 podStartE2EDuration="2m18.318459346s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.275480103 +0000 UTC m=+162.238017069" watchObservedRunningTime="2025-12-04 15:38:38.318459346 +0000 UTC m=+162.280996332"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.320788 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xmjc6" podStartSLOduration=138.320775205 podStartE2EDuration="2m18.320775205s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.317232734 +0000 UTC m=+162.279769710" watchObservedRunningTime="2025-12-04 15:38:38.320775205 +0000 UTC m=+162.283312161"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.348617 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kbvkk" podStartSLOduration=138.348592259 podStartE2EDuration="2m18.348592259s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.347517341 +0000 UTC m=+162.310054307" watchObservedRunningTime="2025-12-04 15:38:38.348592259 +0000 UTC m=+162.311129215"
Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.413174 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 04 15:38:38 crc kubenswrapper[4878]: E1204 15:38:38.415121 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:38.915094086 +0000 UTC m=+162.877631062 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.457147 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-7mgms" podStartSLOduration=138.457121894 podStartE2EDuration="2m18.457121894s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.421325936 +0000 UTC m=+162.383862892" watchObservedRunningTime="2025-12-04 15:38:38.457121894 +0000 UTC m=+162.419658850" Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.515678 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.516719 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tcd9t" Dec 04 15:38:38 crc kubenswrapper[4878]: E1204 15:38:38.518147 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-04 15:38:39.01812038 +0000 UTC m=+162.980657336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.530713 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-j5bq2" podStartSLOduration=138.530693072 podStartE2EDuration="2m18.530693072s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:38.516892658 +0000 UTC m=+162.479429614" watchObservedRunningTime="2025-12-04 15:38:38.530693072 +0000 UTC m=+162.493230028" Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.623510 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:38 crc kubenswrapper[4878]: E1204 15:38:38.624255 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:39.124235223 +0000 UTC m=+163.086772179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.693655 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:38 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:38 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:38 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.693722 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.726976 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:38 crc kubenswrapper[4878]: E1204 15:38:38.727639 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-04 15:38:39.227624966 +0000 UTC m=+163.190161922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.829040 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:38 crc kubenswrapper[4878]: E1204 15:38:38.829528 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:39.329508751 +0000 UTC m=+163.292045707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:38 crc kubenswrapper[4878]: I1204 15:38:38.930827 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:38 crc kubenswrapper[4878]: E1204 15:38:38.931279 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:39.431261182 +0000 UTC m=+163.393798138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.032505 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.033228 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:39.533202709 +0000 UTC m=+163.495739655 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.134929 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.135366 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:39.63534865 +0000 UTC m=+163.597885606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.235970 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.236211 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:39.736158297 +0000 UTC m=+163.698695253 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.236626 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.237137 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:39.737111681 +0000 UTC m=+163.699648637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.335709 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" event={"ID":"44fcd9c6-d991-4e6a-903d-bd23c6123d47","Type":"ContainerStarted","Data":"525d6e5a670bda9ee1e9b95d5ecdb1033043669d66a2bba912a1800c9af7128d"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.337107 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.337606 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:39.837568229 +0000 UTC m=+163.800105195 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.368818 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" event={"ID":"99943e7a-151b-4129-9205-f7e78e43fd3c","Type":"ContainerStarted","Data":"e673b25ea839dbef73d5011d27a2d710260783eb0fe884c36414e7b790052712"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.373254 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4b6xn" podStartSLOduration=139.373235724 podStartE2EDuration="2m19.373235724s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.372060594 +0000 UTC m=+163.334597550" watchObservedRunningTime="2025-12-04 15:38:39.373235724 +0000 UTC m=+163.335772680" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.380317 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" event={"ID":"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f","Type":"ContainerStarted","Data":"d07eb371915e944d0115b2ef667297024203b002bcf8f1877f3b8966bf189343"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.421945 4878 generic.go:334] "Generic (PLEG): container finished" podID="19e43204-f248-4d01-a9a8-9c264008e2fb" containerID="2be8d263d74396cd579ec25b6078d659548818ef985a10cf8eb441d9d7ae2978" exitCode=0 Dec 04 15:38:39 crc 
kubenswrapper[4878]: I1204 15:38:39.422313 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" event={"ID":"19e43204-f248-4d01-a9a8-9c264008e2fb","Type":"ContainerDied","Data":"2be8d263d74396cd579ec25b6078d659548818ef985a10cf8eb441d9d7ae2978"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.432976 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" event={"ID":"cda9500b-96aa-457f-b588-cb2efd9f36e9","Type":"ContainerStarted","Data":"015a5b2687f70ea81304fa5391c2af4e4c3411c63be91012bb9871ad1ba4e411"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.440346 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" event={"ID":"1b7f3bd6-78e8-46b2-ae10-631575d200ec","Type":"ContainerStarted","Data":"f2cbbcb041fa005f6449e66f8e3d99d06bea1c6d6ac01c4a6ecb8d83e2ba3a1f"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.440432 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" event={"ID":"1b7f3bd6-78e8-46b2-ae10-631575d200ec","Type":"ContainerStarted","Data":"8c90b63fd7e64cd9eb36ff84c4292a0765a468b700b35f6836d08bd9e992b1c3"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.441772 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.443969 4878 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:39.943946789 +0000 UTC m=+163.906483835 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.462491 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" podStartSLOduration=139.462469505 podStartE2EDuration="2m19.462469505s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.425137926 +0000 UTC m=+163.387674892" watchObservedRunningTime="2025-12-04 15:38:39.462469505 +0000 UTC m=+163.425006461" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.471374 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj" event={"ID":"8025a3f7-bab8-4787-bada-09aceb2e001b","Type":"ContainerStarted","Data":"48498ec183f4de30c3bb283c507ca4985e52210e40c4d5a4ea39934390c7c762"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.518088 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" event={"ID":"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba","Type":"ContainerStarted","Data":"3cd1fb3492bc2e669bc23433cfb6d3ae3a64e038d6202af6e626359d09089da8"} Dec 04 15:38:39 crc kubenswrapper[4878]: 
I1204 15:38:39.535781 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-cpnks" event={"ID":"d181976c-fe3f-40f0-a8e4-5b1774143896","Type":"ContainerStarted","Data":"b5370ba227b3a3e0c1599a55e2b1a7c3191bce279845c6e1603bf3a460fe4ec3"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.545585 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.547149 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.047124977 +0000 UTC m=+164.009661933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.551541 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cjht8" event={"ID":"2ed08c9a-e799-4301-a2f1-fec6ffa81c45","Type":"ContainerStarted","Data":"890d0c0577e06c96cdb9cf5f18181f49f29d62556c579cd58328c5bcd53f955c"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.555833 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" podStartSLOduration=140.55580934 podStartE2EDuration="2m20.55580934s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.527145354 +0000 UTC m=+163.489682320" watchObservedRunningTime="2025-12-04 15:38:39.55580934 +0000 UTC m=+163.518346286" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.562487 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-fm9qg" podStartSLOduration=139.56245956 podStartE2EDuration="2m19.56245956s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.555222745 +0000 UTC m=+163.517759711" watchObservedRunningTime="2025-12-04 15:38:39.56245956 +0000 UTC m=+163.524996516" Dec 04 15:38:39 crc 
kubenswrapper[4878]: I1204 15:38:39.572156 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" event={"ID":"6b3e5556-d548-4ffc-a8a2-7b476164f5b7","Type":"ContainerStarted","Data":"3792cd7144d3988d64099d94c66857af67d88d69498b8e809f65366bcddbad1c"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.573150 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.585506 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" event={"ID":"82f0fdc8-482e-4dde-8ecd-3607d5548331","Type":"ContainerStarted","Data":"fa7e157347acf09743ccab634fceac1dac01a63b3791bb4ef4e572cb3cdad166"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.620234 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" event={"ID":"835eb0a3-753e-44e0-8124-ac51072a4692","Type":"ContainerStarted","Data":"f9a7a63f87401f897e901942792203640705cca65f758c105af4347cf0b97df3"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.638615 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" event={"ID":"d7bd8c4d-4d65-40c4-9971-9a7677e7ad3c","Type":"ContainerStarted","Data":"461a81741c31a03d64393f20b2aaf6f9e89fe057f9f1bb6148af19192a7ef81a"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.674068 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.675466 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.17545001 +0000 UTC m=+164.137986966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.679095 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:39 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:39 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:39 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.679155 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.692767 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" 
event={"ID":"1acfd737-9d16-428f-b839-1f5f24a7298c","Type":"ContainerStarted","Data":"6210841fdd7c9479d75cbc42e7f801dae33d3d6569628f79eaa11b3437c1ca44"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.695304 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8d9bj" podStartSLOduration=139.695289419 podStartE2EDuration="2m19.695289419s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.637724402 +0000 UTC m=+163.600261358" watchObservedRunningTime="2025-12-04 15:38:39.695289419 +0000 UTC m=+163.657826385" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.739257 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-cpnks" podStartSLOduration=8.739241287 podStartE2EDuration="8.739241287s" podCreationTimestamp="2025-12-04 15:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.696205723 +0000 UTC m=+163.658742699" watchObservedRunningTime="2025-12-04 15:38:39.739241287 +0000 UTC m=+163.701778243" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.740501 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-dgtbf" podStartSLOduration=139.740491919 podStartE2EDuration="2m19.740491919s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.737715998 +0000 UTC m=+163.700252954" watchObservedRunningTime="2025-12-04 15:38:39.740491919 +0000 UTC m=+163.703028875" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.763842 4878 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj" event={"ID":"e1fde0c6-c891-4ccf-8947-3fbdfe8c243e","Type":"ContainerStarted","Data":"fa29f595149daa630be9ec9e469f8431968cf8f47b8293895769714b9cedf9d0"} Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.774087 4878 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-75pd8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.774195 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" podUID="1437aa02-6698-481c-ab03-8b2c02f64774" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.775004 4878 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4cwq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.775037 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v4cwq" podUID="af8c7a67-79c2-4892-a180-ee539e48bd2b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.775909 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.776036 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.276004391 +0000 UTC m=+164.238541337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.776187 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.777846 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.277789797 +0000 UTC m=+164.240326933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.781322 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-kt85s" podStartSLOduration=139.781303377 podStartE2EDuration="2m19.781303377s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.780903357 +0000 UTC m=+163.743440333" watchObservedRunningTime="2025-12-04 15:38:39.781303377 +0000 UTC m=+163.743840333" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.785917 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-q2v7l" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.879939 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.882964 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-04 15:38:40.382934025 +0000 UTC m=+164.345471011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.895278 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zvgw5" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.951701 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" podStartSLOduration=139.951676999 podStartE2EDuration="2m19.951676999s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.869301095 +0000 UTC m=+163.831838051" watchObservedRunningTime="2025-12-04 15:38:39.951676999 +0000 UTC m=+163.914213955" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.960290 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-n8vsq" podStartSLOduration=139.96026386 podStartE2EDuration="2m19.96026386s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:39.95363585 +0000 UTC m=+163.916172806" watchObservedRunningTime="2025-12-04 15:38:39.96026386 +0000 
UTC m=+163.922800816" Dec 04 15:38:39 crc kubenswrapper[4878]: I1204 15:38:39.983800 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:39 crc kubenswrapper[4878]: E1204 15:38:39.984215 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.484197494 +0000 UTC m=+164.446734450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.047247 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lgmhx" podStartSLOduration=140.047221681 podStartE2EDuration="2m20.047221681s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:40.038236091 +0000 UTC m=+164.000773047" watchObservedRunningTime="2025-12-04 15:38:40.047221681 +0000 UTC m=+164.009758637" Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.086480 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.086890 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.586855878 +0000 UTC m=+164.549392834 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.188765 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.189139 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.689124153 +0000 UTC m=+164.651661109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.254495 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj" podStartSLOduration=140.25447041 podStartE2EDuration="2m20.25447041s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:40.253138895 +0000 UTC m=+164.215675851" watchObservedRunningTime="2025-12-04 15:38:40.25447041 +0000 UTC m=+164.217007366" Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.289504 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.289969 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.78994913 +0000 UTC m=+164.752486086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.391753 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.392214 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.892191904 +0000 UTC m=+164.854728850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.492950 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.493069 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.993041702 +0000 UTC m=+164.955578658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.493296 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.493662 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:40.993653788 +0000 UTC m=+164.956190744 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.605522 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.605746 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:41.105705323 +0000 UTC m=+165.068242279 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.605809 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.606377 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:41.106370551 +0000 UTC m=+165.068907507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.681515 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:40 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:40 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:40 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.681595 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.707472 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.707932 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-04 15:38:41.207905366 +0000 UTC m=+165.170442322 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.764532 4878 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7lkq5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.764613 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" podUID="e78a5905-c297-4c98-81ae-a8a194a86c37" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.787630 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" event={"ID":"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f","Type":"ContainerStarted","Data":"43592584c30ee6253aba9a740a06846315e1aa2eeb11d150c0beb51e05217e19"} Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.794653 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cjht8" 
event={"ID":"2ed08c9a-e799-4301-a2f1-fec6ffa81c45","Type":"ContainerStarted","Data":"b3ae7ce4153b98306e84bfed79fcd997384ed867024e7f557dd0149f54a9d93b"} Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.794840 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.807490 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" event={"ID":"6b3e5556-d548-4ffc-a8a2-7b476164f5b7","Type":"ContainerStarted","Data":"0ae2a93b69e2835f80c358f5ee7c3f7285529129ba5a9880a35c6986b62c51e6"} Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.808730 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.809278 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:41.309254257 +0000 UTC m=+165.271791213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.812058 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-drjfj" event={"ID":"e1fde0c6-c891-4ccf-8947-3fbdfe8c243e","Type":"ContainerStarted","Data":"c2eb5607b8bbe3fc5d4756db96cee04d10ac68a1e7b492cc26b5382f2d4d5765"} Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.818509 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" event={"ID":"9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba","Type":"ContainerStarted","Data":"5f99eb9a5b8fbc3a9a6134f1250740df7b6398c39b78c8d0a733a939b635a571"} Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.830052 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-cjht8" podStartSLOduration=9.83002225 podStartE2EDuration="9.83002225s" podCreationTimestamp="2025-12-04 15:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:40.82728937 +0000 UTC m=+164.789826336" watchObservedRunningTime="2025-12-04 15:38:40.83002225 +0000 UTC m=+164.792559206" Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.865311 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7lkq5" Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.877756 4878 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" podStartSLOduration=141.877731204 podStartE2EDuration="2m21.877731204s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:40.876333569 +0000 UTC m=+164.838870525" watchObservedRunningTime="2025-12-04 15:38:40.877731204 +0000 UTC m=+164.840268160" Dec 04 15:38:40 crc kubenswrapper[4878]: I1204 15:38:40.913946 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:40 crc kubenswrapper[4878]: E1204 15:38:40.916008 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:41.415986066 +0000 UTC m=+165.378523022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.016945 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.017430 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:41.517409228 +0000 UTC m=+165.479946184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.118838 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.119231 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:41.61920648 +0000 UTC m=+165.581743436 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.215221 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6pdg"] Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.216516 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.222414 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.223928 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.224245 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:41.724231576 +0000 UTC m=+165.686768532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.225786 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6pdg"] Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.327213 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.328021 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqq59\" (UniqueName: \"kubernetes.io/projected/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-kube-api-access-jqq59\") pod \"certified-operators-b6pdg\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.328127 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-catalog-content\") pod \"certified-operators-b6pdg\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.328369 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-utilities\") pod \"certified-operators-b6pdg\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.329157 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:41.829120658 +0000 UTC m=+165.791657614 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.353598 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.427733 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r676v"] Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.427981 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e43204-f248-4d01-a9a8-9c264008e2fb" containerName="collect-profiles" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.427997 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e43204-f248-4d01-a9a8-9c264008e2fb" containerName="collect-profiles" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.428104 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e43204-f248-4d01-a9a8-9c264008e2fb" containerName="collect-profiles" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.428711 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r676v"] Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.428801 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.429628 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpw86\" (UniqueName: \"kubernetes.io/projected/19e43204-f248-4d01-a9a8-9c264008e2fb-kube-api-access-zpw86\") pod \"19e43204-f248-4d01-a9a8-9c264008e2fb\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.429776 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e43204-f248-4d01-a9a8-9c264008e2fb-config-volume\") pod \"19e43204-f248-4d01-a9a8-9c264008e2fb\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.429825 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e43204-f248-4d01-a9a8-9c264008e2fb-secret-volume\") pod \"19e43204-f248-4d01-a9a8-9c264008e2fb\" (UID: \"19e43204-f248-4d01-a9a8-9c264008e2fb\") " Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.430011 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-catalog-content\") pod \"certified-operators-b6pdg\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.430043 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-utilities\") pod \"certified-operators-b6pdg\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.430074 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.430123 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqq59\" (UniqueName: \"kubernetes.io/projected/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-kube-api-access-jqq59\") pod \"certified-operators-b6pdg\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.436359 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e43204-f248-4d01-a9a8-9c264008e2fb-kube-api-access-zpw86" (OuterVolumeSpecName: "kube-api-access-zpw86") pod "19e43204-f248-4d01-a9a8-9c264008e2fb" (UID: "19e43204-f248-4d01-a9a8-9c264008e2fb"). InnerVolumeSpecName "kube-api-access-zpw86". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.436956 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19e43204-f248-4d01-a9a8-9c264008e2fb-config-volume" (OuterVolumeSpecName: "config-volume") pod "19e43204-f248-4d01-a9a8-9c264008e2fb" (UID: "19e43204-f248-4d01-a9a8-9c264008e2fb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.438380 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-04 15:38:41.938356761 +0000 UTC m=+165.900893777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.438618 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.439350 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-catalog-content\") pod \"certified-operators-b6pdg\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.443363 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19e43204-f248-4d01-a9a8-9c264008e2fb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "19e43204-f248-4d01-a9a8-9c264008e2fb" (UID: "19e43204-f248-4d01-a9a8-9c264008e2fb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.531416 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.531668 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dhhk\" (UniqueName: \"kubernetes.io/projected/3801d81c-ca75-43a3-a612-71d2d97517a6-kube-api-access-5dhhk\") pod \"community-operators-r676v\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.531746 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-catalog-content\") pod \"community-operators-r676v\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.531767 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-utilities\") pod \"community-operators-r676v\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.531815 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpw86\" (UniqueName: \"kubernetes.io/projected/19e43204-f248-4d01-a9a8-9c264008e2fb-kube-api-access-zpw86\") on node \"crc\" DevicePath \"\"" Dec 04 15:38:41 crc 
kubenswrapper[4878]: I1204 15:38:41.531826 4878 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e43204-f248-4d01-a9a8-9c264008e2fb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.531834 4878 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/19e43204-f248-4d01-a9a8-9c264008e2fb-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.531987 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.031968333 +0000 UTC m=+165.994505289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.598917 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-548gn"] Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.599994 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.608558 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-548gn"] Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.633705 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-utilities\") pod \"community-operators-r676v\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.633794 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.633845 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dhhk\" (UniqueName: \"kubernetes.io/projected/3801d81c-ca75-43a3-a612-71d2d97517a6-kube-api-access-5dhhk\") pod \"community-operators-r676v\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.633919 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-catalog-content\") pod \"community-operators-r676v\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.634236 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-utilities\") pod \"community-operators-r676v\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.634324 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.134299369 +0000 UTC m=+166.096836425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.637003 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-catalog-content\") pod \"community-operators-r676v\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.651034 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dhhk\" (UniqueName: \"kubernetes.io/projected/3801d81c-ca75-43a3-a612-71d2d97517a6-kube-api-access-5dhhk\") pod \"community-operators-r676v\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.679158 4878 patch_prober.go:28] interesting 
pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:41 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:41 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:41 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.679239 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.735550 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.735804 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-catalog-content\") pod \"certified-operators-548gn\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.735971 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.235929678 +0000 UTC m=+166.198466634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.736178 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-utilities\") pod \"certified-operators-548gn\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.736323 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td45b\" (UniqueName: \"kubernetes.io/projected/b188cb9c-20d6-438e-b53a-a8207d1dedab-kube-api-access-td45b\") pod \"certified-operators-548gn\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.736445 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.736911 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-04 15:38:42.236901752 +0000 UTC m=+166.199438708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.761600 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-utilities\") pod \"certified-operators-b6pdg\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.765799 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqq59\" (UniqueName: \"kubernetes.io/projected/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-kube-api-access-jqq59\") pod \"certified-operators-b6pdg\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.778650 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r676v" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.806700 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gtmjq"] Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.807990 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.826098 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtmjq"] Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.831540 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" event={"ID":"19e43204-f248-4d01-a9a8-9c264008e2fb","Type":"ContainerDied","Data":"68ff31e1ccba8f6dda94f8d3db1f599a783482db6eaebd167017b62fc67753e3"} Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.831583 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68ff31e1ccba8f6dda94f8d3db1f599a783482db6eaebd167017b62fc67753e3" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.831649 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.837557 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.837785 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-catalog-content\") pod \"certified-operators-548gn\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.837967 4878 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.337941475 +0000 UTC m=+166.300478431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.838711 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-catalog-content\") pod \"certified-operators-548gn\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.838920 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-utilities\") pod \"certified-operators-548gn\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.839232 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-utilities\") pod \"certified-operators-548gn\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.839696 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-td45b\" (UniqueName: \"kubernetes.io/projected/b188cb9c-20d6-438e-b53a-a8207d1dedab-kube-api-access-td45b\") pod \"certified-operators-548gn\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.839779 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.840171 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.340155472 +0000 UTC m=+166.302692428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.842918 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" event={"ID":"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f","Type":"ContainerStarted","Data":"c0fbb3a8e6284b717fe26370e5ce2a0c83a27bda847198912ae2b478becd2700"} Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.857009 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.869626 4878 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.892861 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td45b\" (UniqueName: \"kubernetes.io/projected/b188cb9c-20d6-438e-b53a-a8207d1dedab-kube-api-access-td45b\") pod \"certified-operators-548gn\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.914400 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.941197 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.941493 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh7w6\" (UniqueName: \"kubernetes.io/projected/55ca468a-efe5-4a85-95c0-ee07fc59102f-kube-api-access-fh7w6\") pod \"community-operators-gtmjq\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.941715 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-utilities\") pod \"community-operators-gtmjq\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:41 crc kubenswrapper[4878]: I1204 15:38:41.941750 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-catalog-content\") pod \"community-operators-gtmjq\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:41 crc kubenswrapper[4878]: E1204 15:38:41.942545 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-12-04 15:38:42.442524909 +0000 UTC m=+166.405061865 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.042818 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh7w6\" (UniqueName: \"kubernetes.io/projected/55ca468a-efe5-4a85-95c0-ee07fc59102f-kube-api-access-fh7w6\") pod \"community-operators-gtmjq\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.042931 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-utilities\") pod \"community-operators-gtmjq\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.042956 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-catalog-content\") pod \"community-operators-gtmjq\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.042979 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:42 crc kubenswrapper[4878]: E1204 15:38:42.043316 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.543302176 +0000 UTC m=+166.505839132 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.044508 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-utilities\") pod \"community-operators-gtmjq\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.044715 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-catalog-content\") pod \"community-operators-gtmjq\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.078926 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh7w6\" 
(UniqueName: \"kubernetes.io/projected/55ca468a-efe5-4a85-95c0-ee07fc59102f-kube-api-access-fh7w6\") pod \"community-operators-gtmjq\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.143699 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.143934 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:42 crc kubenswrapper[4878]: E1204 15:38:42.144607 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.644564814 +0000 UTC m=+166.607101770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.147160 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab155c5e-9187-4276-98c7-20c0d7e35f4b-metrics-certs\") pod \"network-metrics-daemon-k9k9q\" (UID: \"ab155c5e-9187-4276-98c7-20c0d7e35f4b\") " pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.171365 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.202464 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r676v"] Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.229490 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.230289 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.238173 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.238414 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.246607 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:42 crc kubenswrapper[4878]: E1204 15:38:42.247012 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.746996393 +0000 UTC m=+166.709533349 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fkp9z" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.255550 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.348982 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.349374 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.349421 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:42 crc kubenswrapper[4878]: E1204 15:38:42.349578 4878 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-04 15:38:42.849555225 +0000 UTC m=+166.812092181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.359977 4878 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-04T15:38:41.869656459Z","Handler":null,"Name":""} Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.375250 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-dlt5w" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.388024 4878 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.388067 4878 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.398062 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-k9k9q" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.434162 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-548gn"] Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.460859 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.460999 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.461074 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.461385 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.469789 4878 csi_attacher.go:380] kubernetes.io/csi: 
attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.469844 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.473829 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6pdg"] Dec 04 15:38:42 crc kubenswrapper[4878]: W1204 15:38:42.506163 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd3c0c75_fcf5_4ab1_a561_1fd45cbf8728.slice/crio-26137f2dc9fc715f1dba103fca3f5e89fb5d4f7c6cb0ff495908dab53639da07 WatchSource:0}: Error finding container 26137f2dc9fc715f1dba103fca3f5e89fb5d4f7c6cb0ff495908dab53639da07: Status 404 returned error can't find the container with id 26137f2dc9fc715f1dba103fca3f5e89fb5d4f7c6cb0ff495908dab53639da07 Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.517364 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.533402 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fkp9z\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.554458 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.566256 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.612678 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gtmjq"] Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.616553 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 04 15:38:42 crc kubenswrapper[4878]: W1204 15:38:42.620733 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55ca468a_efe5_4a85_95c0_ee07fc59102f.slice/crio-402982e86ff6c8708cd4b318b1454d2e4d6d8fc59798aa32a0a0b73fc6d76ba3 WatchSource:0}: Error finding container 402982e86ff6c8708cd4b318b1454d2e4d6d8fc59798aa32a0a0b73fc6d76ba3: Status 404 returned error can't find the container with id 402982e86ff6c8708cd4b318b1454d2e4d6d8fc59798aa32a0a0b73fc6d76ba3 Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.682158 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:42 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:42 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:42 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.682266 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.727695 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-k9k9q"] Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.782189 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.864335 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmjq" event={"ID":"55ca468a-efe5-4a85-95c0-ee07fc59102f","Type":"ContainerStarted","Data":"402982e86ff6c8708cd4b318b1454d2e4d6d8fc59798aa32a0a0b73fc6d76ba3"} Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.869974 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" event={"ID":"2ff8b525-fc20-4c6b-8ac5-cea0ae705c0f","Type":"ContainerStarted","Data":"0a5eb242be07fac3f0984fa045d54f263dd1596e72d26397ef7ea6767a4ca62a"} Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.881390 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548gn" event={"ID":"b188cb9c-20d6-438e-b53a-a8207d1dedab","Type":"ContainerStarted","Data":"2f3bd6f27c8d96ea77cbf880b7ee506fa8c0014c91d3e97db17c4fc030f72209"} Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.883390 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pdg" event={"ID":"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728","Type":"ContainerStarted","Data":"26137f2dc9fc715f1dba103fca3f5e89fb5d4f7c6cb0ff495908dab53639da07"} Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.885181 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" event={"ID":"ab155c5e-9187-4276-98c7-20c0d7e35f4b","Type":"ContainerStarted","Data":"5dc015f0b3f7e19579c698a1061fd42c6fabc438ae9ada01b65fdbb1fadfe053"} Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.894991 4878 generic.go:334] "Generic (PLEG): container finished" podID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerID="a04de338e83e4992d0011115c2afc21134fc9c45ec9c43180392ec0f428d607d" exitCode=0 Dec 04 15:38:42 crc 
kubenswrapper[4878]: I1204 15:38:42.895044 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r676v" event={"ID":"3801d81c-ca75-43a3-a612-71d2d97517a6","Type":"ContainerDied","Data":"a04de338e83e4992d0011115c2afc21134fc9c45ec9c43180392ec0f428d607d"} Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.895122 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r676v" event={"ID":"3801d81c-ca75-43a3-a612-71d2d97517a6","Type":"ContainerStarted","Data":"5ff743c3cd1e07d2fb2873200b54744ff990c90211cabb08d3af819b835bbf32"} Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.896232 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 04 15:38:42 crc kubenswrapper[4878]: I1204 15:38:42.898395 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.137720 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkp9z"] Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.198486 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 04 15:38:43 crc kubenswrapper[4878]: W1204 15:38:43.215726 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26a0fa4d_3430_4477_beae_2b0fa9819756.slice/crio-ed5b7e753ba684ac5b81067bd4bb09e880a9e27aa64183d675688fb265e38297 WatchSource:0}: Error finding container ed5b7e753ba684ac5b81067bd4bb09e880a9e27aa64183d675688fb265e38297: Status 404 returned error can't find the container with id ed5b7e753ba684ac5b81067bd4bb09e880a9e27aa64183d675688fb265e38297 Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 
15:38:43.361945 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.362405 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.369134 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.403364 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tljkb"] Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.404728 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.409209 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.418650 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tljkb"] Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.490885 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-catalog-content\") pod \"redhat-marketplace-tljkb\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.490970 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg7z7\" (UniqueName: \"kubernetes.io/projected/84a615f4-5e6f-4d2e-86b6-59453037cd11-kube-api-access-qg7z7\") pod \"redhat-marketplace-tljkb\" 
(UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.491079 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-utilities\") pod \"redhat-marketplace-tljkb\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.532634 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.532701 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.534562 4878 patch_prober.go:28] interesting pod/console-f9d7485db-4x82r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.534618 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4x82r" podUID="988eba95-b990-4f5a-ad25-e4129a8849d1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.586423 4878 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4cwq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.586581 4878 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-console/downloads-7954f5f757-v4cwq" podUID="af8c7a67-79c2-4892-a180-ee539e48bd2b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.586423 4878 patch_prober.go:28] interesting pod/downloads-7954f5f757-v4cwq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" start-of-body= Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.586691 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-v4cwq" podUID="af8c7a67-79c2-4892-a180-ee539e48bd2b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.23:8080/\": dial tcp 10.217.0.23:8080: connect: connection refused" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.592359 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-utilities\") pod \"redhat-marketplace-tljkb\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.592442 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-catalog-content\") pod \"redhat-marketplace-tljkb\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.592491 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg7z7\" (UniqueName: 
\"kubernetes.io/projected/84a615f4-5e6f-4d2e-86b6-59453037cd11-kube-api-access-qg7z7\") pod \"redhat-marketplace-tljkb\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.593446 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-utilities\") pod \"redhat-marketplace-tljkb\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.593654 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-catalog-content\") pod \"redhat-marketplace-tljkb\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.617969 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg7z7\" (UniqueName: \"kubernetes.io/projected/84a615f4-5e6f-4d2e-86b6-59453037cd11-kube-api-access-qg7z7\") pod \"redhat-marketplace-tljkb\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.631170 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.631226 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.639136 4878 patch_prober.go:28] interesting pod/apiserver-76f77b778f-h2w2r container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]log ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]etcd ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/generic-apiserver-start-informers ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/max-in-flight-filter ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 04 15:38:43 crc kubenswrapper[4878]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 04 15:38:43 crc kubenswrapper[4878]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/project.openshift.io-projectcache ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/openshift.io-startinformers ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 04 15:38:43 crc kubenswrapper[4878]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 04 15:38:43 crc kubenswrapper[4878]: livez check failed Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.639204 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" podUID="9416b4fe-14c0-4bc6-8a82-e1bffb6a0dba" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.677726 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 
15:38:43.679574 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:43 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:43 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:43 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.679659 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.726004 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.801958 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-znw6m"] Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.803428 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.824531 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw6m"] Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.896840 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-utilities\") pod \"redhat-marketplace-znw6m\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.896978 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sccv\" (UniqueName: \"kubernetes.io/projected/e672441e-4dcc-48e6-8d64-409e4c213051-kube-api-access-4sccv\") pod \"redhat-marketplace-znw6m\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.897427 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-catalog-content\") pod \"redhat-marketplace-znw6m\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.905208 4878 generic.go:334] "Generic (PLEG): container finished" podID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerID="5b3831ae9c8a3a4a05c4062f6f1746101b35186145818644161c68f072e32c1a" exitCode=0 Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.905314 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmjq" 
event={"ID":"55ca468a-efe5-4a85-95c0-ee07fc59102f","Type":"ContainerDied","Data":"5b3831ae9c8a3a4a05c4062f6f1746101b35186145818644161c68f072e32c1a"} Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.909518 4878 generic.go:334] "Generic (PLEG): container finished" podID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerID="2e40379a402f2759314bb67ac920dc90ca13a02b621fbe93ab547fba639596b4" exitCode=0 Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.909599 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548gn" event={"ID":"b188cb9c-20d6-438e-b53a-a8207d1dedab","Type":"ContainerDied","Data":"2e40379a402f2759314bb67ac920dc90ca13a02b621fbe93ab547fba639596b4"} Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.919098 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e7e550d9-328d-48c8-b8cc-a90567e7c2d3","Type":"ContainerStarted","Data":"1c5f02707a3e97d24b1b8dd981c1d931a4a5ffa3c6957174798e07514bf499fa"} Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.919338 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e7e550d9-328d-48c8-b8cc-a90567e7c2d3","Type":"ContainerStarted","Data":"f9320f55758b5512f1c1f0acdeb4ac8b42175e4e6c3de76a649ddc35dab1b428"} Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.925805 4878 generic.go:334] "Generic (PLEG): container finished" podID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerID="1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9" exitCode=0 Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.926604 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pdg" event={"ID":"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728","Type":"ContainerDied","Data":"1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9"} Dec 04 15:38:43 crc 
kubenswrapper[4878]: I1204 15:38:43.929398 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" event={"ID":"26a0fa4d-3430-4477-beae-2b0fa9819756","Type":"ContainerStarted","Data":"6293ed127c808400dfe5a23c6d843a64d79e4e46a728f5aee4eb1a6dc330f4a7"} Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.929452 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" event={"ID":"26a0fa4d-3430-4477-beae-2b0fa9819756","Type":"ContainerStarted","Data":"ed5b7e753ba684ac5b81067bd4bb09e880a9e27aa64183d675688fb265e38297"} Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.930146 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.942278 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" event={"ID":"ab155c5e-9187-4276-98c7-20c0d7e35f4b","Type":"ContainerStarted","Data":"a7cb077611232e9a2516ca6f6b6ef98e2cb593090277679b8b4b523de6f7ecdf"} Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.942344 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-k9k9q" event={"ID":"ab155c5e-9187-4276-98c7-20c0d7e35f4b","Type":"ContainerStarted","Data":"fe9b39ab72ff05937c7b6e8b3ef0165ba14141b40554e7ec4aa36386480b95bd"} Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.948722 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tvshr" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.961672 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.961629736 podStartE2EDuration="1.961629736s" podCreationTimestamp="2025-12-04 15:38:42 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:43.957022198 +0000 UTC m=+167.919559174" watchObservedRunningTime="2025-12-04 15:38:43.961629736 +0000 UTC m=+167.924166692" Dec 04 15:38:43 crc kubenswrapper[4878]: I1204 15:38:43.977572 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tljkb"] Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:43.999079 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-utilities\") pod \"redhat-marketplace-znw6m\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:44 crc kubenswrapper[4878]: W1204 15:38:43.999087 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84a615f4_5e6f_4d2e_86b6_59453037cd11.slice/crio-5d8e6c02f67bfe73fb093591f9481b413012f45b3f19c3f26b4c4dae2605492c WatchSource:0}: Error finding container 5d8e6c02f67bfe73fb093591f9481b413012f45b3f19c3f26b4c4dae2605492c: Status 404 returned error can't find the container with id 5d8e6c02f67bfe73fb093591f9481b413012f45b3f19c3f26b4c4dae2605492c Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:43.999181 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sccv\" (UniqueName: \"kubernetes.io/projected/e672441e-4dcc-48e6-8d64-409e4c213051-kube-api-access-4sccv\") pod \"redhat-marketplace-znw6m\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:43.999277 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-catalog-content\") pod \"redhat-marketplace-znw6m\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:43.999910 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-utilities\") pod \"redhat-marketplace-znw6m\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:43.999952 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-catalog-content\") pod \"redhat-marketplace-znw6m\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.002139 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" podStartSLOduration=144.002117405 podStartE2EDuration="2m24.002117405s" podCreationTimestamp="2025-12-04 15:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:44.001455108 +0000 UTC m=+167.963992084" watchObservedRunningTime="2025-12-04 15:38:44.002117405 +0000 UTC m=+167.964654371" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.020131 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sccv\" (UniqueName: \"kubernetes.io/projected/e672441e-4dcc-48e6-8d64-409e4c213051-kube-api-access-4sccv\") pod \"redhat-marketplace-znw6m\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:44 crc 
kubenswrapper[4878]: I1204 15:38:44.126863 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.401125 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8dtmz" podStartSLOduration=13.401103264 podStartE2EDuration="13.401103264s" podCreationTimestamp="2025-12-04 15:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:44.399306918 +0000 UTC m=+168.361843884" watchObservedRunningTime="2025-12-04 15:38:44.401103264 +0000 UTC m=+168.363640220" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.452522 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-r8m2c"] Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.454416 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.454862 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r8m2c"] Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.457473 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-k9k9q" podStartSLOduration=145.45744215 podStartE2EDuration="2m25.45744215s" podCreationTimestamp="2025-12-04 15:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:38:44.435111047 +0000 UTC m=+168.397648003" watchObservedRunningTime="2025-12-04 15:38:44.45744215 +0000 UTC m=+168.419979106" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.457955 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.608663 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-catalog-content\") pod \"redhat-operators-r8m2c\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.608752 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-utilities\") pod \"redhat-operators-r8m2c\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.608815 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qmcj9\" (UniqueName: \"kubernetes.io/projected/d556355d-6041-4606-8551-c3642a5a57b4-kube-api-access-qmcj9\") pod \"redhat-operators-r8m2c\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.647537 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw6m"] Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.678680 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:44 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:44 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:44 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.678743 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.711121 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmcj9\" (UniqueName: \"kubernetes.io/projected/d556355d-6041-4606-8551-c3642a5a57b4-kube-api-access-qmcj9\") pod \"redhat-operators-r8m2c\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.711229 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-catalog-content\") pod \"redhat-operators-r8m2c\" (UID: 
\"d556355d-6041-4606-8551-c3642a5a57b4\") " pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.712005 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-utilities\") pod \"redhat-operators-r8m2c\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.715559 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-utilities\") pod \"redhat-operators-r8m2c\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.718287 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-catalog-content\") pod \"redhat-operators-r8m2c\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.736458 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmcj9\" (UniqueName: \"kubernetes.io/projected/d556355d-6041-4606-8551-c3642a5a57b4-kube-api-access-qmcj9\") pod \"redhat-operators-r8m2c\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.770453 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.800270 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zm6l5"] Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.806837 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.811150 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zm6l5"] Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.935378 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.957186 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-utilities\") pod \"redhat-operators-zm6l5\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.957262 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-catalog-content\") pod \"redhat-operators-zm6l5\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.957351 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f75r7\" (UniqueName: \"kubernetes.io/projected/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-kube-api-access-f75r7\") pod \"redhat-operators-zm6l5\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " 
pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.958759 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw6m" event={"ID":"e672441e-4dcc-48e6-8d64-409e4c213051","Type":"ContainerStarted","Data":"41fb2ded504627709336a27752111d83562121043fff104fe2c1afb9dd12d6f6"} Dec 04 15:38:44 crc kubenswrapper[4878]: I1204 15:38:44.960941 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tljkb" event={"ID":"84a615f4-5e6f-4d2e-86b6-59453037cd11","Type":"ContainerStarted","Data":"5d8e6c02f67bfe73fb093591f9481b413012f45b3f19c3f26b4c4dae2605492c"} Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.029319 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-r8m2c"] Dec 04 15:38:45 crc kubenswrapper[4878]: W1204 15:38:45.035380 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd556355d_6041_4606_8551_c3642a5a57b4.slice/crio-6ef760090ac45187e8c3401feba793fc1184a111447e0b027865c4a857e80083 WatchSource:0}: Error finding container 6ef760090ac45187e8c3401feba793fc1184a111447e0b027865c4a857e80083: Status 404 returned error can't find the container with id 6ef760090ac45187e8c3401feba793fc1184a111447e0b027865c4a857e80083 Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.059982 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f75r7\" (UniqueName: \"kubernetes.io/projected/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-kube-api-access-f75r7\") pod \"redhat-operators-zm6l5\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.060186 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-utilities\") pod \"redhat-operators-zm6l5\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.060213 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-catalog-content\") pod \"redhat-operators-zm6l5\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.062781 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-utilities\") pod \"redhat-operators-zm6l5\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.062839 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-catalog-content\") pod \"redhat-operators-zm6l5\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.079983 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f75r7\" (UniqueName: \"kubernetes.io/projected/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-kube-api-access-f75r7\") pod \"redhat-operators-zm6l5\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.177074 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.383329 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zm6l5"] Dec 04 15:38:45 crc kubenswrapper[4878]: W1204 15:38:45.405037 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf8b496cc_fe4a_4308_91fd_0a7f61f1ba76.slice/crio-e2249f9ba7ca46a3b459831648cd4ef0c23f0c03d5b199de7ca6add40f05e143 WatchSource:0}: Error finding container e2249f9ba7ca46a3b459831648cd4ef0c23f0c03d5b199de7ca6add40f05e143: Status 404 returned error can't find the container with id e2249f9ba7ca46a3b459831648cd4ef0c23f0c03d5b199de7ca6add40f05e143 Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.682434 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:45 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:45 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:45 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.682586 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.971334 4878 generic.go:334] "Generic (PLEG): container finished" podID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerID="1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2" exitCode=0 Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.971400 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-tljkb" event={"ID":"84a615f4-5e6f-4d2e-86b6-59453037cd11","Type":"ContainerDied","Data":"1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2"} Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.973910 4878 generic.go:334] "Generic (PLEG): container finished" podID="d556355d-6041-4606-8551-c3642a5a57b4" containerID="0c262e0afc02ad99691808c6e7619c6532cbd733e359c4a0b22ddc1a76cdbdb0" exitCode=0 Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.973995 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8m2c" event={"ID":"d556355d-6041-4606-8551-c3642a5a57b4","Type":"ContainerDied","Data":"0c262e0afc02ad99691808c6e7619c6532cbd733e359c4a0b22ddc1a76cdbdb0"} Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.974042 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8m2c" event={"ID":"d556355d-6041-4606-8551-c3642a5a57b4","Type":"ContainerStarted","Data":"6ef760090ac45187e8c3401feba793fc1184a111447e0b027865c4a857e80083"} Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.977731 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm6l5" event={"ID":"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76","Type":"ContainerStarted","Data":"cced5a45e4ada658b7964e6545dcb53615f98e0b05f99f32d791e403855fcaa4"} Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.977772 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm6l5" event={"ID":"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76","Type":"ContainerStarted","Data":"e2249f9ba7ca46a3b459831648cd4ef0c23f0c03d5b199de7ca6add40f05e143"} Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.983487 4878 generic.go:334] "Generic (PLEG): container finished" podID="e672441e-4dcc-48e6-8d64-409e4c213051" containerID="e72ae4ff882465c2043a5737cbfeaa4d37e69be060751d24c3ac16f346300d29" exitCode=0 Dec 
04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.983560 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw6m" event={"ID":"e672441e-4dcc-48e6-8d64-409e4c213051","Type":"ContainerDied","Data":"e72ae4ff882465c2043a5737cbfeaa4d37e69be060751d24c3ac16f346300d29"} Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.985572 4878 generic.go:334] "Generic (PLEG): container finished" podID="e7e550d9-328d-48c8-b8cc-a90567e7c2d3" containerID="1c5f02707a3e97d24b1b8dd981c1d931a4a5ffa3c6957174798e07514bf499fa" exitCode=0 Dec 04 15:38:45 crc kubenswrapper[4878]: I1204 15:38:45.985669 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e7e550d9-328d-48c8-b8cc-a90567e7c2d3","Type":"ContainerDied","Data":"1c5f02707a3e97d24b1b8dd981c1d931a4a5ffa3c6957174798e07514bf499fa"} Dec 04 15:38:46 crc kubenswrapper[4878]: I1204 15:38:46.679146 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:46 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:46 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:46 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:46 crc kubenswrapper[4878]: I1204 15:38:46.679220 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:46 crc kubenswrapper[4878]: I1204 15:38:46.997330 4878 generic.go:334] "Generic (PLEG): container finished" podID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerID="cced5a45e4ada658b7964e6545dcb53615f98e0b05f99f32d791e403855fcaa4" exitCode=0 Dec 04 
15:38:46 crc kubenswrapper[4878]: I1204 15:38:46.997755 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm6l5" event={"ID":"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76","Type":"ContainerDied","Data":"cced5a45e4ada658b7964e6545dcb53615f98e0b05f99f32d791e403855fcaa4"} Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.255554 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.256847 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.257303 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.261398 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.261542 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.273837 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.400589 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kube-api-access\") pod \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\" (UID: \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\") " Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.400664 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kubelet-dir\") pod \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\" (UID: \"e7e550d9-328d-48c8-b8cc-a90567e7c2d3\") " Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.400810 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ea09df-dc93-485c-8d38-a85694e3d980-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"42ea09df-dc93-485c-8d38-a85694e3d980\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.400911 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ea09df-dc93-485c-8d38-a85694e3d980-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"42ea09df-dc93-485c-8d38-a85694e3d980\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.401020 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e7e550d9-328d-48c8-b8cc-a90567e7c2d3" (UID: "e7e550d9-328d-48c8-b8cc-a90567e7c2d3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.410158 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e550d9-328d-48c8-b8cc-a90567e7c2d3" (UID: "e7e550d9-328d-48c8-b8cc-a90567e7c2d3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.502026 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ea09df-dc93-485c-8d38-a85694e3d980-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"42ea09df-dc93-485c-8d38-a85694e3d980\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.502133 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ea09df-dc93-485c-8d38-a85694e3d980-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"42ea09df-dc93-485c-8d38-a85694e3d980\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.502221 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.502236 4878 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7e550d9-328d-48c8-b8cc-a90567e7c2d3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.502286 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ea09df-dc93-485c-8d38-a85694e3d980-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"42ea09df-dc93-485c-8d38-a85694e3d980\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.518516 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ea09df-dc93-485c-8d38-a85694e3d980-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"42ea09df-dc93-485c-8d38-a85694e3d980\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.605578 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.678634 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:47 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:47 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:47 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.678705 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:47 crc kubenswrapper[4878]: I1204 15:38:47.777180 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 04 15:38:47 crc kubenswrapper[4878]: W1204 15:38:47.793306 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod42ea09df_dc93_485c_8d38_a85694e3d980.slice/crio-0b863cfccb436e9100a6726f6b939a2ce1aec65f2d4aaaf7510c3602dad11d96 WatchSource:0}: Error finding container 0b863cfccb436e9100a6726f6b939a2ce1aec65f2d4aaaf7510c3602dad11d96: Status 404 returned error can't find the container with id 0b863cfccb436e9100a6726f6b939a2ce1aec65f2d4aaaf7510c3602dad11d96 Dec 04 15:38:48 crc kubenswrapper[4878]: I1204 15:38:48.006605 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" 
event={"ID":"42ea09df-dc93-485c-8d38-a85694e3d980","Type":"ContainerStarted","Data":"0b863cfccb436e9100a6726f6b939a2ce1aec65f2d4aaaf7510c3602dad11d96"} Dec 04 15:38:48 crc kubenswrapper[4878]: I1204 15:38:48.009352 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e7e550d9-328d-48c8-b8cc-a90567e7c2d3","Type":"ContainerDied","Data":"f9320f55758b5512f1c1f0acdeb4ac8b42175e4e6c3de76a649ddc35dab1b428"} Dec 04 15:38:48 crc kubenswrapper[4878]: I1204 15:38:48.009391 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9320f55758b5512f1c1f0acdeb4ac8b42175e4e6c3de76a649ddc35dab1b428" Dec 04 15:38:48 crc kubenswrapper[4878]: I1204 15:38:48.009457 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 04 15:38:48 crc kubenswrapper[4878]: I1204 15:38:48.636859 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:48 crc kubenswrapper[4878]: I1204 15:38:48.641578 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-h2w2r" Dec 04 15:38:48 crc kubenswrapper[4878]: I1204 15:38:48.698404 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:48 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:48 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:48 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:48 crc kubenswrapper[4878]: I1204 15:38:48.698763 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" 
podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:49 crc kubenswrapper[4878]: I1204 15:38:49.679661 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:49 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:49 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:49 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:49 crc kubenswrapper[4878]: I1204 15:38:49.679754 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:50 crc kubenswrapper[4878]: I1204 15:38:50.105016 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-cjht8" Dec 04 15:38:50 crc kubenswrapper[4878]: I1204 15:38:50.678977 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:50 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:50 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:50 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:50 crc kubenswrapper[4878]: I1204 15:38:50.679084 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:51 crc 
kubenswrapper[4878]: I1204 15:38:51.029846 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"42ea09df-dc93-485c-8d38-a85694e3d980","Type":"ContainerStarted","Data":"72f78d5b4517d147b796ef024bde7f0ab4c852d650445f3c022f92491487a09d"} Dec 04 15:38:51 crc kubenswrapper[4878]: I1204 15:38:51.678098 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:51 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:51 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:51 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:51 crc kubenswrapper[4878]: I1204 15:38:51.678160 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:52 crc kubenswrapper[4878]: I1204 15:38:52.040127 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-kdhmk_cda9500b-96aa-457f-b588-cb2efd9f36e9/cluster-samples-operator/0.log" Dec 04 15:38:52 crc kubenswrapper[4878]: I1204 15:38:52.040214 4878 generic.go:334] "Generic (PLEG): container finished" podID="cda9500b-96aa-457f-b588-cb2efd9f36e9" containerID="1c266b8a07571f206f8684f423d6e8e77c1588a173efe17be92c6d69967d1d56" exitCode=2 Dec 04 15:38:52 crc kubenswrapper[4878]: I1204 15:38:52.040277 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" 
event={"ID":"cda9500b-96aa-457f-b588-cb2efd9f36e9","Type":"ContainerDied","Data":"1c266b8a07571f206f8684f423d6e8e77c1588a173efe17be92c6d69967d1d56"} Dec 04 15:38:52 crc kubenswrapper[4878]: I1204 15:38:52.041058 4878 scope.go:117] "RemoveContainer" containerID="1c266b8a07571f206f8684f423d6e8e77c1588a173efe17be92c6d69967d1d56" Dec 04 15:38:52 crc kubenswrapper[4878]: I1204 15:38:52.678897 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:52 crc kubenswrapper[4878]: [-]has-synced failed: reason withheld Dec 04 15:38:52 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:52 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:52 crc kubenswrapper[4878]: I1204 15:38:52.679190 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:53 crc kubenswrapper[4878]: I1204 15:38:53.053451 4878 generic.go:334] "Generic (PLEG): container finished" podID="42ea09df-dc93-485c-8d38-a85694e3d980" containerID="72f78d5b4517d147b796ef024bde7f0ab4c852d650445f3c022f92491487a09d" exitCode=0 Dec 04 15:38:53 crc kubenswrapper[4878]: I1204 15:38:53.053503 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"42ea09df-dc93-485c-8d38-a85694e3d980","Type":"ContainerDied","Data":"72f78d5b4517d147b796ef024bde7f0ab4c852d650445f3c022f92491487a09d"} Dec 04 15:38:53 crc kubenswrapper[4878]: I1204 15:38:53.533070 4878 patch_prober.go:28] interesting pod/console-f9d7485db-4x82r container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 
10.217.0.8:8443: connect: connection refused" start-of-body= Dec 04 15:38:53 crc kubenswrapper[4878]: I1204 15:38:53.533145 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4x82r" podUID="988eba95-b990-4f5a-ad25-e4129a8849d1" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 04 15:38:53 crc kubenswrapper[4878]: I1204 15:38:53.590862 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-v4cwq" Dec 04 15:38:53 crc kubenswrapper[4878]: I1204 15:38:53.689118 4878 patch_prober.go:28] interesting pod/router-default-5444994796-pwnk4 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 04 15:38:53 crc kubenswrapper[4878]: [+]has-synced ok Dec 04 15:38:53 crc kubenswrapper[4878]: [+]process-running ok Dec 04 15:38:53 crc kubenswrapper[4878]: healthz check failed Dec 04 15:38:53 crc kubenswrapper[4878]: I1204 15:38:53.689199 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-pwnk4" podUID="199d51ae-0d72-4e64-a8eb-546c07076c21" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 04 15:38:54 crc kubenswrapper[4878]: I1204 15:38:54.680976 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:54 crc kubenswrapper[4878]: I1204 15:38:54.684287 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-pwnk4" Dec 04 15:38:55 crc kubenswrapper[4878]: I1204 15:38:55.071014 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-kdhmk_cda9500b-96aa-457f-b588-cb2efd9f36e9/cluster-samples-operator/0.log" Dec 04 15:38:55 crc kubenswrapper[4878]: I1204 15:38:55.071230 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-kdhmk" event={"ID":"cda9500b-96aa-457f-b588-cb2efd9f36e9","Type":"ContainerStarted","Data":"666e7c251a7e2a9686b78c3816fbff543d16df5361b3947617d42d359b257dd8"} Dec 04 15:39:00 crc kubenswrapper[4878]: I1204 15:39:00.840771 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:39:00 crc kubenswrapper[4878]: I1204 15:39:00.841385 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:39:02 crc kubenswrapper[4878]: I1204 15:39:02.200238 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:39:02 crc kubenswrapper[4878]: I1204 15:39:02.220391 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ea09df-dc93-485c-8d38-a85694e3d980-kube-api-access\") pod \"42ea09df-dc93-485c-8d38-a85694e3d980\" (UID: \"42ea09df-dc93-485c-8d38-a85694e3d980\") " Dec 04 15:39:02 crc kubenswrapper[4878]: I1204 15:39:02.220464 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ea09df-dc93-485c-8d38-a85694e3d980-kubelet-dir\") pod \"42ea09df-dc93-485c-8d38-a85694e3d980\" (UID: \"42ea09df-dc93-485c-8d38-a85694e3d980\") " Dec 04 15:39:02 crc kubenswrapper[4878]: I1204 15:39:02.221015 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42ea09df-dc93-485c-8d38-a85694e3d980-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "42ea09df-dc93-485c-8d38-a85694e3d980" (UID: "42ea09df-dc93-485c-8d38-a85694e3d980"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:39:02 crc kubenswrapper[4878]: I1204 15:39:02.233088 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42ea09df-dc93-485c-8d38-a85694e3d980-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "42ea09df-dc93-485c-8d38-a85694e3d980" (UID: "42ea09df-dc93-485c-8d38-a85694e3d980"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:39:02 crc kubenswrapper[4878]: I1204 15:39:02.321519 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/42ea09df-dc93-485c-8d38-a85694e3d980-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 15:39:02 crc kubenswrapper[4878]: I1204 15:39:02.321561 4878 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/42ea09df-dc93-485c-8d38-a85694e3d980-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:39:02 crc kubenswrapper[4878]: I1204 15:39:02.790252 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:39:03 crc kubenswrapper[4878]: I1204 15:39:03.120786 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"42ea09df-dc93-485c-8d38-a85694e3d980","Type":"ContainerDied","Data":"0b863cfccb436e9100a6726f6b939a2ce1aec65f2d4aaaf7510c3602dad11d96"} Dec 04 15:39:03 crc kubenswrapper[4878]: I1204 15:39:03.120837 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b863cfccb436e9100a6726f6b939a2ce1aec65f2d4aaaf7510c3602dad11d96" Dec 04 15:39:03 crc kubenswrapper[4878]: I1204 15:39:03.120918 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 04 15:39:03 crc kubenswrapper[4878]: I1204 15:39:03.536269 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:39:03 crc kubenswrapper[4878]: I1204 15:39:03.539889 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:39:04 crc kubenswrapper[4878]: I1204 15:39:04.198932 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 04 15:39:14 crc kubenswrapper[4878]: I1204 15:39:14.954190 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6lvdj" Dec 04 15:39:18 crc kubenswrapper[4878]: E1204 15:39:18.804791 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 04 15:39:18 crc kubenswrapper[4878]: E1204 15:39:18.805020 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td45b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-548gn_openshift-marketplace(b188cb9c-20d6-438e-b53a-a8207d1dedab): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 15:39:18 crc kubenswrapper[4878]: E1204 15:39:18.806200 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-548gn" podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" Dec 04 15:39:20 crc 
kubenswrapper[4878]: I1204 15:39:20.857821 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 04 15:39:20 crc kubenswrapper[4878]: E1204 15:39:20.858459 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42ea09df-dc93-485c-8d38-a85694e3d980" containerName="pruner" Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.858477 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="42ea09df-dc93-485c-8d38-a85694e3d980" containerName="pruner" Dec 04 15:39:20 crc kubenswrapper[4878]: E1204 15:39:20.858495 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e550d9-328d-48c8-b8cc-a90567e7c2d3" containerName="pruner" Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.858503 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e550d9-328d-48c8-b8cc-a90567e7c2d3" containerName="pruner" Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.858637 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e550d9-328d-48c8-b8cc-a90567e7c2d3" containerName="pruner" Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.858657 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="42ea09df-dc93-485c-8d38-a85694e3d980" containerName="pruner" Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.859152 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.863272 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.863667 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.869200 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.912348 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:20 crc kubenswrapper[4878]: I1204 15:39:20.912604 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:21 crc kubenswrapper[4878]: I1204 15:39:21.014230 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:21 crc kubenswrapper[4878]: I1204 15:39:21.014354 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:21 crc kubenswrapper[4878]: I1204 15:39:21.014431 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:21 crc kubenswrapper[4878]: I1204 15:39:21.037577 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:21 crc kubenswrapper[4878]: I1204 15:39:21.177306 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.443047 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.444405 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.451562 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.476041 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-var-lock\") pod \"installer-9-crc\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.476135 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kube-api-access\") pod \"installer-9-crc\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.476538 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.578451 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-var-lock\") pod \"installer-9-crc\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.578515 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kube-api-access\") pod \"installer-9-crc\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.578576 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-var-lock\") pod \"installer-9-crc\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.578608 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.578670 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.596993 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kube-api-access\") pod \"installer-9-crc\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:25 crc kubenswrapper[4878]: I1204 15:39:25.776388 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:39:27 crc kubenswrapper[4878]: E1204 15:39:27.181160 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 04 15:39:27 crc kubenswrapper[4878]: E1204 15:39:27.181333 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sccv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Containe
rResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-znw6m_openshift-marketplace(e672441e-4dcc-48e6-8d64-409e4c213051): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 15:39:27 crc kubenswrapper[4878]: E1204 15:39:27.182555 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-znw6m" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" Dec 04 15:39:30 crc kubenswrapper[4878]: I1204 15:39:30.840642 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:39:30 crc kubenswrapper[4878]: I1204 15:39:30.841024 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:39:30 crc kubenswrapper[4878]: I1204 15:39:30.841093 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:39:30 crc kubenswrapper[4878]: I1204 15:39:30.841835 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 15:39:30 crc kubenswrapper[4878]: I1204 15:39:30.841965 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d" gracePeriod=600 Dec 04 15:39:33 crc kubenswrapper[4878]: E1204 15:39:33.482955 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 04 15:39:33 crc kubenswrapper[4878]: E1204 15:39:33.483362 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5dhhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r676v_openshift-marketplace(3801d81c-ca75-43a3-a612-71d2d97517a6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 15:39:33 crc kubenswrapper[4878]: E1204 15:39:33.484949 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-r676v" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" Dec 04 15:39:33 crc 
kubenswrapper[4878]: E1204 15:39:33.489563 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 04 15:39:33 crc kubenswrapper[4878]: E1204 15:39:33.489722 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqq59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-b6pdg_openshift-marketplace(cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 15:39:33 crc kubenswrapper[4878]: E1204 15:39:33.490866 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-b6pdg" podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" Dec 04 15:39:34 crc kubenswrapper[4878]: I1204 15:39:34.292855 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d" exitCode=0 Dec 04 15:39:34 crc kubenswrapper[4878]: I1204 15:39:34.293082 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d"} Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.478198 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r676v" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.480713 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-b6pdg" 
podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.490090 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.490379 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qg7z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPol
icy:nil,} start failed in pod redhat-marketplace-tljkb_openshift-marketplace(84a615f4-5e6f-4d2e-86b6-59453037cd11): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.491649 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-tljkb" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.512428 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.512590 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f75r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zm6l5_openshift-marketplace(f8b496cc-fe4a-4308-91fd-0a7f61f1ba76): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.514644 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zm6l5" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" Dec 04 15:39:36 crc 
kubenswrapper[4878]: E1204 15:39:36.522028 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.522171 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmcj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-r8m2c_openshift-marketplace(d556355d-6041-4606-8551-c3642a5a57b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.523477 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-r8m2c" podUID="d556355d-6041-4606-8551-c3642a5a57b4" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.566160 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.566562 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fh7w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gtmjq_openshift-marketplace(55ca468a-efe5-4a85-95c0-ee07fc59102f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 04 15:39:36 crc kubenswrapper[4878]: E1204 15:39:36.567810 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gtmjq" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" Dec 04 15:39:36 crc 
kubenswrapper[4878]: I1204 15:39:36.866943 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 04 15:39:36 crc kubenswrapper[4878]: W1204 15:39:36.873134 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod70ac35b6_291f_4c1e_af49_7cb620da5ca1.slice/crio-2ad98f9961664531eea70fc460f4966a18952986c58afbe65c94a3c98728efce WatchSource:0}: Error finding container 2ad98f9961664531eea70fc460f4966a18952986c58afbe65c94a3c98728efce: Status 404 returned error can't find the container with id 2ad98f9961664531eea70fc460f4966a18952986c58afbe65c94a3c98728efce Dec 04 15:39:36 crc kubenswrapper[4878]: I1204 15:39:36.915148 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 04 15:39:36 crc kubenswrapper[4878]: W1204 15:39:36.927896 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod659bb95e_9292_4b23_a76e_3cb3b4d24a23.slice/crio-36d50cbc1b17023b174b71e8a4ca9a8946b4d149f9ad5ce6187883d9c1c0666e WatchSource:0}: Error finding container 36d50cbc1b17023b174b71e8a4ca9a8946b4d149f9ad5ce6187883d9c1c0666e: Status 404 returned error can't find the container with id 36d50cbc1b17023b174b71e8a4ca9a8946b4d149f9ad5ce6187883d9c1c0666e Dec 04 15:39:37 crc kubenswrapper[4878]: I1204 15:39:37.310675 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"aa732be55c172daa9ee825f6a000d49abcb7b598e573f5ef19d017ddfe51de60"} Dec 04 15:39:37 crc kubenswrapper[4878]: I1204 15:39:37.312864 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"659bb95e-9292-4b23-a76e-3cb3b4d24a23","Type":"ContainerStarted","Data":"8c64a69a2673618b7e8044f172d0a9e5988548c91b4b037ab96684764012d863"} Dec 04 
15:39:37 crc kubenswrapper[4878]: I1204 15:39:37.312936 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"659bb95e-9292-4b23-a76e-3cb3b4d24a23","Type":"ContainerStarted","Data":"36d50cbc1b17023b174b71e8a4ca9a8946b4d149f9ad5ce6187883d9c1c0666e"} Dec 04 15:39:37 crc kubenswrapper[4878]: I1204 15:39:37.315336 4878 generic.go:334] "Generic (PLEG): container finished" podID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerID="804584bc8a4cd547b1ecb41efd56862ca23113ec36d8624478ba2b74e3be3f5d" exitCode=0 Dec 04 15:39:37 crc kubenswrapper[4878]: I1204 15:39:37.315446 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548gn" event={"ID":"b188cb9c-20d6-438e-b53a-a8207d1dedab","Type":"ContainerDied","Data":"804584bc8a4cd547b1ecb41efd56862ca23113ec36d8624478ba2b74e3be3f5d"} Dec 04 15:39:37 crc kubenswrapper[4878]: I1204 15:39:37.318059 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"70ac35b6-291f-4c1e-af49-7cb620da5ca1","Type":"ContainerStarted","Data":"e185dde9a772afb28a6549a87e4e04e63c2a1ade00877f8ff553a2c22898f342"} Dec 04 15:39:37 crc kubenswrapper[4878]: I1204 15:39:37.318086 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"70ac35b6-291f-4c1e-af49-7cb620da5ca1","Type":"ContainerStarted","Data":"2ad98f9961664531eea70fc460f4966a18952986c58afbe65c94a3c98728efce"} Dec 04 15:39:37 crc kubenswrapper[4878]: E1204 15:39:37.319322 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gtmjq" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" Dec 04 15:39:37 crc kubenswrapper[4878]: E1204 15:39:37.321292 4878 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zm6l5" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" Dec 04 15:39:37 crc kubenswrapper[4878]: I1204 15:39:37.421255 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=12.421227008 podStartE2EDuration="12.421227008s" podCreationTimestamp="2025-12-04 15:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:39:37.398969484 +0000 UTC m=+221.361506440" watchObservedRunningTime="2025-12-04 15:39:37.421227008 +0000 UTC m=+221.383763974" Dec 04 15:39:37 crc kubenswrapper[4878]: I1204 15:39:37.422311 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=17.422304975 podStartE2EDuration="17.422304975s" podCreationTimestamp="2025-12-04 15:39:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:39:37.419238108 +0000 UTC m=+221.381775064" watchObservedRunningTime="2025-12-04 15:39:37.422304975 +0000 UTC m=+221.384841931" Dec 04 15:39:38 crc kubenswrapper[4878]: I1204 15:39:38.342082 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548gn" event={"ID":"b188cb9c-20d6-438e-b53a-a8207d1dedab","Type":"ContainerStarted","Data":"2e0d355e43c57b684ecadbe37dc77fa77cf22e90d7923bbb1b1d255639d69cf3"} Dec 04 15:39:38 crc kubenswrapper[4878]: I1204 15:39:38.356615 4878 generic.go:334] "Generic (PLEG): container finished" podID="659bb95e-9292-4b23-a76e-3cb3b4d24a23" 
containerID="8c64a69a2673618b7e8044f172d0a9e5988548c91b4b037ab96684764012d863" exitCode=0 Dec 04 15:39:38 crc kubenswrapper[4878]: I1204 15:39:38.357156 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"659bb95e-9292-4b23-a76e-3cb3b4d24a23","Type":"ContainerDied","Data":"8c64a69a2673618b7e8044f172d0a9e5988548c91b4b037ab96684764012d863"} Dec 04 15:39:38 crc kubenswrapper[4878]: I1204 15:39:38.377102 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-548gn" podStartSLOduration=3.552565698 podStartE2EDuration="57.377072775s" podCreationTimestamp="2025-12-04 15:38:41 +0000 UTC" firstStartedPulling="2025-12-04 15:38:43.913623384 +0000 UTC m=+167.876160350" lastFinishedPulling="2025-12-04 15:39:37.738130471 +0000 UTC m=+221.700667427" observedRunningTime="2025-12-04 15:39:38.371964828 +0000 UTC m=+222.334501794" watchObservedRunningTime="2025-12-04 15:39:38.377072775 +0000 UTC m=+222.339609731" Dec 04 15:39:39 crc kubenswrapper[4878]: I1204 15:39:39.593796 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:39 crc kubenswrapper[4878]: I1204 15:39:39.701651 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kube-api-access\") pod \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\" (UID: \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\") " Dec 04 15:39:39 crc kubenswrapper[4878]: I1204 15:39:39.701865 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kubelet-dir\") pod \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\" (UID: \"659bb95e-9292-4b23-a76e-3cb3b4d24a23\") " Dec 04 15:39:39 crc kubenswrapper[4878]: I1204 15:39:39.702028 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "659bb95e-9292-4b23-a76e-3cb3b4d24a23" (UID: "659bb95e-9292-4b23-a76e-3cb3b4d24a23"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:39:39 crc kubenswrapper[4878]: I1204 15:39:39.702425 4878 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:39:39 crc kubenswrapper[4878]: I1204 15:39:39.707458 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "659bb95e-9292-4b23-a76e-3cb3b4d24a23" (UID: "659bb95e-9292-4b23-a76e-3cb3b4d24a23"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:39:39 crc kubenswrapper[4878]: I1204 15:39:39.803890 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/659bb95e-9292-4b23-a76e-3cb3b4d24a23-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 15:39:40 crc kubenswrapper[4878]: I1204 15:39:40.372101 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"659bb95e-9292-4b23-a76e-3cb3b4d24a23","Type":"ContainerDied","Data":"36d50cbc1b17023b174b71e8a4ca9a8946b4d149f9ad5ce6187883d9c1c0666e"} Dec 04 15:39:40 crc kubenswrapper[4878]: I1204 15:39:40.372460 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36d50cbc1b17023b174b71e8a4ca9a8946b4d149f9ad5ce6187883d9c1c0666e" Dec 04 15:39:40 crc kubenswrapper[4878]: I1204 15:39:40.372529 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 04 15:39:41 crc kubenswrapper[4878]: I1204 15:39:41.380813 4878 generic.go:334] "Generic (PLEG): container finished" podID="e672441e-4dcc-48e6-8d64-409e4c213051" containerID="07841b0c314ca7f67563ac90702bc72ca15184c97f549b73266ce77f13185824" exitCode=0 Dec 04 15:39:41 crc kubenswrapper[4878]: I1204 15:39:41.380969 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw6m" event={"ID":"e672441e-4dcc-48e6-8d64-409e4c213051","Type":"ContainerDied","Data":"07841b0c314ca7f67563ac90702bc72ca15184c97f549b73266ce77f13185824"} Dec 04 15:39:41 crc kubenswrapper[4878]: I1204 15:39:41.915644 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:39:41 crc kubenswrapper[4878]: I1204 15:39:41.915706 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:39:41 crc kubenswrapper[4878]: I1204 15:39:41.986251 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:39:42 crc kubenswrapper[4878]: I1204 15:39:42.390025 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw6m" event={"ID":"e672441e-4dcc-48e6-8d64-409e4c213051","Type":"ContainerStarted","Data":"4cbee050a65758249604ed549d44e0a60ce17e6c67c0e1efccc79d1a6eafa1eb"} Dec 04 15:39:42 crc kubenswrapper[4878]: I1204 15:39:42.416258 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-znw6m" podStartSLOduration=4.628123592 podStartE2EDuration="59.416232865s" podCreationTimestamp="2025-12-04 15:38:43 +0000 UTC" firstStartedPulling="2025-12-04 15:38:46.999044444 +0000 UTC m=+170.961581400" lastFinishedPulling="2025-12-04 15:39:41.787153717 +0000 UTC m=+225.749690673" observedRunningTime="2025-12-04 15:39:42.411380694 +0000 UTC m=+226.373917670" watchObservedRunningTime="2025-12-04 15:39:42.416232865 +0000 UTC m=+226.378769821" Dec 04 15:39:42 crc kubenswrapper[4878]: I1204 15:39:42.432401 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:39:43 crc kubenswrapper[4878]: I1204 15:39:43.167042 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-548gn"] Dec 04 15:39:44 crc kubenswrapper[4878]: I1204 15:39:44.127888 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:39:44 crc kubenswrapper[4878]: I1204 15:39:44.128545 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:39:44 crc kubenswrapper[4878]: I1204 
15:39:44.172016 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:39:44 crc kubenswrapper[4878]: I1204 15:39:44.399515 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-548gn" podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerName="registry-server" containerID="cri-o://2e0d355e43c57b684ecadbe37dc77fa77cf22e90d7923bbb1b1d255639d69cf3" gracePeriod=2 Dec 04 15:39:46 crc kubenswrapper[4878]: I1204 15:39:46.427628 4878 generic.go:334] "Generic (PLEG): container finished" podID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerID="2e0d355e43c57b684ecadbe37dc77fa77cf22e90d7923bbb1b1d255639d69cf3" exitCode=0 Dec 04 15:39:46 crc kubenswrapper[4878]: I1204 15:39:46.427740 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548gn" event={"ID":"b188cb9c-20d6-438e-b53a-a8207d1dedab","Type":"ContainerDied","Data":"2e0d355e43c57b684ecadbe37dc77fa77cf22e90d7923bbb1b1d255639d69cf3"} Dec 04 15:39:46 crc kubenswrapper[4878]: I1204 15:39:46.900502 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.014139 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td45b\" (UniqueName: \"kubernetes.io/projected/b188cb9c-20d6-438e-b53a-a8207d1dedab-kube-api-access-td45b\") pod \"b188cb9c-20d6-438e-b53a-a8207d1dedab\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.014234 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-catalog-content\") pod \"b188cb9c-20d6-438e-b53a-a8207d1dedab\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.014317 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-utilities\") pod \"b188cb9c-20d6-438e-b53a-a8207d1dedab\" (UID: \"b188cb9c-20d6-438e-b53a-a8207d1dedab\") " Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.015244 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-utilities" (OuterVolumeSpecName: "utilities") pod "b188cb9c-20d6-438e-b53a-a8207d1dedab" (UID: "b188cb9c-20d6-438e-b53a-a8207d1dedab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.020144 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b188cb9c-20d6-438e-b53a-a8207d1dedab-kube-api-access-td45b" (OuterVolumeSpecName: "kube-api-access-td45b") pod "b188cb9c-20d6-438e-b53a-a8207d1dedab" (UID: "b188cb9c-20d6-438e-b53a-a8207d1dedab"). InnerVolumeSpecName "kube-api-access-td45b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.068493 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b188cb9c-20d6-438e-b53a-a8207d1dedab" (UID: "b188cb9c-20d6-438e-b53a-a8207d1dedab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.116069 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.116117 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td45b\" (UniqueName: \"kubernetes.io/projected/b188cb9c-20d6-438e-b53a-a8207d1dedab-kube-api-access-td45b\") on node \"crc\" DevicePath \"\"" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.116130 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b188cb9c-20d6-438e-b53a-a8207d1dedab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.437442 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-548gn" event={"ID":"b188cb9c-20d6-438e-b53a-a8207d1dedab","Type":"ContainerDied","Data":"2f3bd6f27c8d96ea77cbf880b7ee506fa8c0014c91d3e97db17c4fc030f72209"} Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.437527 4878 scope.go:117] "RemoveContainer" containerID="2e0d355e43c57b684ecadbe37dc77fa77cf22e90d7923bbb1b1d255639d69cf3" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.437551 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-548gn" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.458028 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-548gn"] Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.462205 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-548gn"] Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.462911 4878 scope.go:117] "RemoveContainer" containerID="804584bc8a4cd547b1ecb41efd56862ca23113ec36d8624478ba2b74e3be3f5d" Dec 04 15:39:47 crc kubenswrapper[4878]: I1204 15:39:47.482175 4878 scope.go:117] "RemoveContainer" containerID="2e40379a402f2759314bb67ac920dc90ca13a02b621fbe93ab547fba639596b4" Dec 04 15:39:48 crc kubenswrapper[4878]: I1204 15:39:48.445249 4878 generic.go:334] "Generic (PLEG): container finished" podID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerID="8bd7915e2de32061e59101b696d26cff29be3d28bc29447ab1ba1f82226ba173" exitCode=0 Dec 04 15:39:48 crc kubenswrapper[4878]: I1204 15:39:48.445626 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r676v" event={"ID":"3801d81c-ca75-43a3-a612-71d2d97517a6","Type":"ContainerDied","Data":"8bd7915e2de32061e59101b696d26cff29be3d28bc29447ab1ba1f82226ba173"} Dec 04 15:39:49 crc kubenswrapper[4878]: I1204 15:39:49.186793 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" path="/var/lib/kubelet/pods/b188cb9c-20d6-438e-b53a-a8207d1dedab/volumes" Dec 04 15:39:50 crc kubenswrapper[4878]: I1204 15:39:50.460626 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r676v" event={"ID":"3801d81c-ca75-43a3-a612-71d2d97517a6","Type":"ContainerStarted","Data":"7af35cfdbf0cd78332293733b5c5feb30015cb724beec17e3d86667c478632b6"} Dec 04 15:39:50 crc kubenswrapper[4878]: I1204 15:39:50.481890 
4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r676v" podStartSLOduration=2.184580979 podStartE2EDuration="1m9.481852019s" podCreationTimestamp="2025-12-04 15:38:41 +0000 UTC" firstStartedPulling="2025-12-04 15:38:42.89802121 +0000 UTC m=+166.860558176" lastFinishedPulling="2025-12-04 15:39:50.19529226 +0000 UTC m=+234.157829216" observedRunningTime="2025-12-04 15:39:50.478085035 +0000 UTC m=+234.440621991" watchObservedRunningTime="2025-12-04 15:39:50.481852019 +0000 UTC m=+234.444388975" Dec 04 15:39:51 crc kubenswrapper[4878]: I1204 15:39:51.469037 4878 generic.go:334] "Generic (PLEG): container finished" podID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerID="cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0" exitCode=0 Dec 04 15:39:51 crc kubenswrapper[4878]: I1204 15:39:51.469131 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pdg" event={"ID":"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728","Type":"ContainerDied","Data":"cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0"} Dec 04 15:39:51 crc kubenswrapper[4878]: I1204 15:39:51.474333 4878 generic.go:334] "Generic (PLEG): container finished" podID="d556355d-6041-4606-8551-c3642a5a57b4" containerID="e52ee4edaad15e663d4783e8ab1bd3489fcea0248d556bcf04cecac6a1c40b8b" exitCode=0 Dec 04 15:39:51 crc kubenswrapper[4878]: I1204 15:39:51.474380 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8m2c" event={"ID":"d556355d-6041-4606-8551-c3642a5a57b4","Type":"ContainerDied","Data":"e52ee4edaad15e663d4783e8ab1bd3489fcea0248d556bcf04cecac6a1c40b8b"} Dec 04 15:39:51 crc kubenswrapper[4878]: I1204 15:39:51.779335 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r676v" Dec 04 15:39:51 crc kubenswrapper[4878]: I1204 15:39:51.780300 4878 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r676v" Dec 04 15:39:52 crc kubenswrapper[4878]: I1204 15:39:52.482163 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8m2c" event={"ID":"d556355d-6041-4606-8551-c3642a5a57b4","Type":"ContainerStarted","Data":"1394b19fe54734102ca7f4b11c5b5bdaeb5635eecc97e17d1b1d717529526a7a"} Dec 04 15:39:52 crc kubenswrapper[4878]: I1204 15:39:52.484629 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm6l5" event={"ID":"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76","Type":"ContainerStarted","Data":"10a9de15cb60a9061813d568a9417ad2c5383d1c5a45ddd476e5e3471c64ebfd"} Dec 04 15:39:52 crc kubenswrapper[4878]: I1204 15:39:52.487944 4878 generic.go:334] "Generic (PLEG): container finished" podID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerID="7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d" exitCode=0 Dec 04 15:39:52 crc kubenswrapper[4878]: I1204 15:39:52.488474 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tljkb" event={"ID":"84a615f4-5e6f-4d2e-86b6-59453037cd11","Type":"ContainerDied","Data":"7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d"} Dec 04 15:39:52 crc kubenswrapper[4878]: I1204 15:39:52.507451 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-r8m2c" podStartSLOduration=3.597350889 podStartE2EDuration="1m8.507431789s" podCreationTimestamp="2025-12-04 15:38:44 +0000 UTC" firstStartedPulling="2025-12-04 15:38:46.999501876 +0000 UTC m=+170.962038832" lastFinishedPulling="2025-12-04 15:39:51.909582776 +0000 UTC m=+235.872119732" observedRunningTime="2025-12-04 15:39:52.505672165 +0000 UTC m=+236.468209131" watchObservedRunningTime="2025-12-04 15:39:52.507431789 +0000 UTC m=+236.469968745" Dec 04 15:39:52 crc kubenswrapper[4878]: I1204 
15:39:52.817206 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-r676v" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerName="registry-server" probeResult="failure" output=< Dec 04 15:39:52 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 15:39:52 crc kubenswrapper[4878]: > Dec 04 15:39:53 crc kubenswrapper[4878]: I1204 15:39:53.497118 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pdg" event={"ID":"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728","Type":"ContainerStarted","Data":"f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8"} Dec 04 15:39:53 crc kubenswrapper[4878]: I1204 15:39:53.499230 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tljkb" event={"ID":"84a615f4-5e6f-4d2e-86b6-59453037cd11","Type":"ContainerStarted","Data":"93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d"} Dec 04 15:39:53 crc kubenswrapper[4878]: I1204 15:39:53.502281 4878 generic.go:334] "Generic (PLEG): container finished" podID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerID="10a9de15cb60a9061813d568a9417ad2c5383d1c5a45ddd476e5e3471c64ebfd" exitCode=0 Dec 04 15:39:53 crc kubenswrapper[4878]: I1204 15:39:53.502318 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm6l5" event={"ID":"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76","Type":"ContainerDied","Data":"10a9de15cb60a9061813d568a9417ad2c5383d1c5a45ddd476e5e3471c64ebfd"} Dec 04 15:39:53 crc kubenswrapper[4878]: I1204 15:39:53.550725 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6pdg" podStartSLOduration=4.35170321 podStartE2EDuration="1m12.550695511s" podCreationTimestamp="2025-12-04 15:38:41 +0000 UTC" firstStartedPulling="2025-12-04 15:38:43.92826829 +0000 UTC m=+167.890805246" 
lastFinishedPulling="2025-12-04 15:39:52.127260591 +0000 UTC m=+236.089797547" observedRunningTime="2025-12-04 15:39:53.52816569 +0000 UTC m=+237.490702666" watchObservedRunningTime="2025-12-04 15:39:53.550695511 +0000 UTC m=+237.513232467" Dec 04 15:39:53 crc kubenswrapper[4878]: I1204 15:39:53.567919 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tljkb" podStartSLOduration=4.686496436 podStartE2EDuration="1m10.567893598s" podCreationTimestamp="2025-12-04 15:38:43 +0000 UTC" firstStartedPulling="2025-12-04 15:38:47.000100231 +0000 UTC m=+170.962637207" lastFinishedPulling="2025-12-04 15:39:52.881497413 +0000 UTC m=+236.844034369" observedRunningTime="2025-12-04 15:39:53.565988861 +0000 UTC m=+237.528525837" watchObservedRunningTime="2025-12-04 15:39:53.567893598 +0000 UTC m=+237.530430554" Dec 04 15:39:53 crc kubenswrapper[4878]: I1204 15:39:53.726306 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:39:53 crc kubenswrapper[4878]: I1204 15:39:53.726385 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:39:54 crc kubenswrapper[4878]: I1204 15:39:54.174651 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:39:54 crc kubenswrapper[4878]: I1204 15:39:54.525575 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm6l5" event={"ID":"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76","Type":"ContainerStarted","Data":"2d5a264d2030ae6e30afe7cf3986d930a832ffcd6c4ce1303f58e903127c1dce"} Dec 04 15:39:54 crc kubenswrapper[4878]: I1204 15:39:54.531897 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmjq" 
event={"ID":"55ca468a-efe5-4a85-95c0-ee07fc59102f","Type":"ContainerStarted","Data":"17a1ed76ec8e95f7e646c09042cf69286f3ec099f94206ba4407e0234dbe3950"} Dec 04 15:39:54 crc kubenswrapper[4878]: I1204 15:39:54.548344 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zm6l5" podStartSLOduration=3.226162543 podStartE2EDuration="1m10.548327338s" podCreationTimestamp="2025-12-04 15:38:44 +0000 UTC" firstStartedPulling="2025-12-04 15:38:46.998974732 +0000 UTC m=+170.961511678" lastFinishedPulling="2025-12-04 15:39:54.321139517 +0000 UTC m=+238.283676473" observedRunningTime="2025-12-04 15:39:54.547961629 +0000 UTC m=+238.510498595" watchObservedRunningTime="2025-12-04 15:39:54.548327338 +0000 UTC m=+238.510864294" Dec 04 15:39:54 crc kubenswrapper[4878]: I1204 15:39:54.771098 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:39:54 crc kubenswrapper[4878]: I1204 15:39:54.771279 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:39:54 crc kubenswrapper[4878]: I1204 15:39:54.774714 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tljkb" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerName="registry-server" probeResult="failure" output=< Dec 04 15:39:54 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 15:39:54 crc kubenswrapper[4878]: > Dec 04 15:39:55 crc kubenswrapper[4878]: I1204 15:39:55.177836 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:39:55 crc kubenswrapper[4878]: I1204 15:39:55.178202 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:39:55 crc kubenswrapper[4878]: I1204 
15:39:55.539732 4878 generic.go:334] "Generic (PLEG): container finished" podID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerID="17a1ed76ec8e95f7e646c09042cf69286f3ec099f94206ba4407e0234dbe3950" exitCode=0 Dec 04 15:39:55 crc kubenswrapper[4878]: I1204 15:39:55.539847 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmjq" event={"ID":"55ca468a-efe5-4a85-95c0-ee07fc59102f","Type":"ContainerDied","Data":"17a1ed76ec8e95f7e646c09042cf69286f3ec099f94206ba4407e0234dbe3950"} Dec 04 15:39:55 crc kubenswrapper[4878]: I1204 15:39:55.575474 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw6m"] Dec 04 15:39:55 crc kubenswrapper[4878]: I1204 15:39:55.575734 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-znw6m" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" containerName="registry-server" containerID="cri-o://4cbee050a65758249604ed549d44e0a60ce17e6c67c0e1efccc79d1a6eafa1eb" gracePeriod=2 Dec 04 15:39:55 crc kubenswrapper[4878]: I1204 15:39:55.822311 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-r8m2c" podUID="d556355d-6041-4606-8551-c3642a5a57b4" containerName="registry-server" probeResult="failure" output=< Dec 04 15:39:55 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 15:39:55 crc kubenswrapper[4878]: > Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.220503 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zm6l5" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerName="registry-server" probeResult="failure" output=< Dec 04 15:39:56 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 15:39:56 crc kubenswrapper[4878]: > Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.617371 4878 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94nl9"] Dec 04 15:39:56 crc kubenswrapper[4878]: E1204 15:39:56.617656 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerName="extract-utilities" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.617670 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerName="extract-utilities" Dec 04 15:39:56 crc kubenswrapper[4878]: E1204 15:39:56.617683 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerName="extract-content" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.617689 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerName="extract-content" Dec 04 15:39:56 crc kubenswrapper[4878]: E1204 15:39:56.617703 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="659bb95e-9292-4b23-a76e-3cb3b4d24a23" containerName="pruner" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.617710 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="659bb95e-9292-4b23-a76e-3cb3b4d24a23" containerName="pruner" Dec 04 15:39:56 crc kubenswrapper[4878]: E1204 15:39:56.617720 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerName="registry-server" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.617726 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerName="registry-server" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.617859 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="659bb95e-9292-4b23-a76e-3cb3b4d24a23" containerName="pruner" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.617896 4878 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b188cb9c-20d6-438e-b53a-a8207d1dedab" containerName="registry-server" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.618377 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.644102 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94nl9"] Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.759134 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-trusted-ca\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.759196 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.759226 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-registry-certificates\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.759409 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.759536 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glmcs\" (UniqueName: \"kubernetes.io/projected/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-kube-api-access-glmcs\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.759616 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-registry-tls\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.759883 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-bound-sa-token\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.759944 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 
15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.789714 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.861597 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glmcs\" (UniqueName: \"kubernetes.io/projected/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-kube-api-access-glmcs\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.861653 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-registry-tls\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.861690 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-bound-sa-token\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.861738 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-trusted-ca\") pod \"image-registry-66df7c8f76-94nl9\" (UID: 
\"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.861759 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.861779 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-registry-certificates\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.861811 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.863381 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-ca-trust-extracted\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.864258 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-registry-certificates\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.871594 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-registry-tls\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.871595 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-installation-pull-secrets\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.886189 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-bound-sa-token\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:56 crc kubenswrapper[4878]: I1204 15:39:56.898360 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glmcs\" (UniqueName: \"kubernetes.io/projected/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-kube-api-access-glmcs\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:58 crc kubenswrapper[4878]: I1204 15:39:58.395041 4878 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947-trusted-ca\") pod \"image-registry-66df7c8f76-94nl9\" (UID: \"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947\") " pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:58 crc kubenswrapper[4878]: I1204 15:39:58.434194 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:39:59 crc kubenswrapper[4878]: I1204 15:39:59.095516 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-94nl9"] Dec 04 15:39:59 crc kubenswrapper[4878]: I1204 15:39:59.569645 4878 generic.go:334] "Generic (PLEG): container finished" podID="e672441e-4dcc-48e6-8d64-409e4c213051" containerID="4cbee050a65758249604ed549d44e0a60ce17e6c67c0e1efccc79d1a6eafa1eb" exitCode=0 Dec 04 15:39:59 crc kubenswrapper[4878]: I1204 15:39:59.569734 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw6m" event={"ID":"e672441e-4dcc-48e6-8d64-409e4c213051","Type":"ContainerDied","Data":"4cbee050a65758249604ed549d44e0a60ce17e6c67c0e1efccc79d1a6eafa1eb"} Dec 04 15:39:59 crc kubenswrapper[4878]: I1204 15:39:59.571618 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" event={"ID":"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947","Type":"ContainerStarted","Data":"a0a136b9fa1ed19ea1b9a41ad3ff5b56583f4a995f0611c09d22a30aef7592d9"} Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.578589 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" event={"ID":"bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947","Type":"ContainerStarted","Data":"c55173af4502e8a736e92d14232543f4678365aeae838af52f1f0cd068bf46f6"} Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.660349 4878 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.826525 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-catalog-content\") pod \"e672441e-4dcc-48e6-8d64-409e4c213051\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.826650 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sccv\" (UniqueName: \"kubernetes.io/projected/e672441e-4dcc-48e6-8d64-409e4c213051-kube-api-access-4sccv\") pod \"e672441e-4dcc-48e6-8d64-409e4c213051\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.826698 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-utilities\") pod \"e672441e-4dcc-48e6-8d64-409e4c213051\" (UID: \"e672441e-4dcc-48e6-8d64-409e4c213051\") " Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.827629 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-utilities" (OuterVolumeSpecName: "utilities") pod "e672441e-4dcc-48e6-8d64-409e4c213051" (UID: "e672441e-4dcc-48e6-8d64-409e4c213051"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.836183 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e672441e-4dcc-48e6-8d64-409e4c213051-kube-api-access-4sccv" (OuterVolumeSpecName: "kube-api-access-4sccv") pod "e672441e-4dcc-48e6-8d64-409e4c213051" (UID: "e672441e-4dcc-48e6-8d64-409e4c213051"). 
InnerVolumeSpecName "kube-api-access-4sccv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.844390 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e672441e-4dcc-48e6-8d64-409e4c213051" (UID: "e672441e-4dcc-48e6-8d64-409e4c213051"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.928546 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.928683 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sccv\" (UniqueName: \"kubernetes.io/projected/e672441e-4dcc-48e6-8d64-409e4c213051-kube-api-access-4sccv\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:00 crc kubenswrapper[4878]: I1204 15:40:00.928701 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e672441e-4dcc-48e6-8d64-409e4c213051-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.587002 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-znw6m" event={"ID":"e672441e-4dcc-48e6-8d64-409e4c213051","Type":"ContainerDied","Data":"41fb2ded504627709336a27752111d83562121043fff104fe2c1afb9dd12d6f6"} Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.587085 4878 scope.go:117] "RemoveContainer" containerID="4cbee050a65758249604ed549d44e0a60ce17e6c67c0e1efccc79d1a6eafa1eb" Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.587119 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-znw6m" Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.610780 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw6m"] Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.616310 4878 scope.go:117] "RemoveContainer" containerID="07841b0c314ca7f67563ac90702bc72ca15184c97f549b73266ce77f13185824" Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.619331 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-znw6m"] Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.635738 4878 scope.go:117] "RemoveContainer" containerID="e72ae4ff882465c2043a5737cbfeaa4d37e69be060751d24c3ac16f346300d29" Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.821855 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r676v" Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.858021 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.858127 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.859084 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r676v" Dec 04 15:40:01 crc kubenswrapper[4878]: I1204 15:40:01.912290 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:40:02 crc kubenswrapper[4878]: I1204 15:40:02.594400 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:40:02 crc kubenswrapper[4878]: I1204 15:40:02.626837 4878 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" podStartSLOduration=6.62680649 podStartE2EDuration="6.62680649s" podCreationTimestamp="2025-12-04 15:39:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:40:02.622709128 +0000 UTC m=+246.585246094" watchObservedRunningTime="2025-12-04 15:40:02.62680649 +0000 UTC m=+246.589343466" Dec 04 15:40:02 crc kubenswrapper[4878]: I1204 15:40:02.640270 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:40:03 crc kubenswrapper[4878]: I1204 15:40:03.190037 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" path="/var/lib/kubelet/pods/e672441e-4dcc-48e6-8d64-409e4c213051/volumes" Dec 04 15:40:03 crc kubenswrapper[4878]: I1204 15:40:03.534808 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-g9zqn"] Dec 04 15:40:03 crc kubenswrapper[4878]: I1204 15:40:03.786499 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:40:03 crc kubenswrapper[4878]: I1204 15:40:03.843206 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.353961 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6pdg"] Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.356807 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtmjq"] Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.361721 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-r676v"] Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.362200 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r676v" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerName="registry-server" containerID="cri-o://7af35cfdbf0cd78332293733b5c5feb30015cb724beec17e3d86667c478632b6" gracePeriod=30 Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.382539 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-75pd8"] Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.382928 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" podUID="1437aa02-6698-481c-ab03-8b2c02f64774" containerName="marketplace-operator" containerID="cri-o://cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666" gracePeriod=30 Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.405810 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tljkb"] Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.410781 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r8m2c"] Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.411221 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-r8m2c" podUID="d556355d-6041-4606-8551-c3642a5a57b4" containerName="registry-server" containerID="cri-o://1394b19fe54734102ca7f4b11c5b5bdaeb5635eecc97e17d1b1d717529526a7a" gracePeriod=30 Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.430354 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b9rwj"] Dec 04 15:40:04 crc kubenswrapper[4878]: E1204 15:40:04.430772 4878 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" containerName="registry-server" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.430809 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" containerName="registry-server" Dec 04 15:40:04 crc kubenswrapper[4878]: E1204 15:40:04.430838 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" containerName="extract-utilities" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.430846 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" containerName="extract-utilities" Dec 04 15:40:04 crc kubenswrapper[4878]: E1204 15:40:04.430864 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" containerName="extract-content" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.430884 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" containerName="extract-content" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.431031 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e672441e-4dcc-48e6-8d64-409e4c213051" containerName="registry-server" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.431564 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.434083 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b9rwj"] Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.436300 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zm6l5"] Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.436515 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zm6l5" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerName="registry-server" containerID="cri-o://2d5a264d2030ae6e30afe7cf3986d930a832ffcd6c4ce1303f58e903127c1dce" gracePeriod=30 Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.585042 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea958ba8-bb58-498e-8c25-a5b8f413f3be-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b9rwj\" (UID: \"ea958ba8-bb58-498e-8c25-a5b8f413f3be\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.585121 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlnrf\" (UniqueName: \"kubernetes.io/projected/ea958ba8-bb58-498e-8c25-a5b8f413f3be-kube-api-access-rlnrf\") pod \"marketplace-operator-79b997595-b9rwj\" (UID: \"ea958ba8-bb58-498e-8c25-a5b8f413f3be\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.585169 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/ea958ba8-bb58-498e-8c25-a5b8f413f3be-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b9rwj\" (UID: \"ea958ba8-bb58-498e-8c25-a5b8f413f3be\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.686981 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea958ba8-bb58-498e-8c25-a5b8f413f3be-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b9rwj\" (UID: \"ea958ba8-bb58-498e-8c25-a5b8f413f3be\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.687048 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlnrf\" (UniqueName: \"kubernetes.io/projected/ea958ba8-bb58-498e-8c25-a5b8f413f3be-kube-api-access-rlnrf\") pod \"marketplace-operator-79b997595-b9rwj\" (UID: \"ea958ba8-bb58-498e-8c25-a5b8f413f3be\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.687090 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ea958ba8-bb58-498e-8c25-a5b8f413f3be-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b9rwj\" (UID: \"ea958ba8-bb58-498e-8c25-a5b8f413f3be\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.688773 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ea958ba8-bb58-498e-8c25-a5b8f413f3be-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b9rwj\" (UID: \"ea958ba8-bb58-498e-8c25-a5b8f413f3be\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 
crc kubenswrapper[4878]: I1204 15:40:04.697342 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ea958ba8-bb58-498e-8c25-a5b8f413f3be-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b9rwj\" (UID: \"ea958ba8-bb58-498e-8c25-a5b8f413f3be\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.707637 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlnrf\" (UniqueName: \"kubernetes.io/projected/ea958ba8-bb58-498e-8c25-a5b8f413f3be-kube-api-access-rlnrf\") pod \"marketplace-operator-79b997595-b9rwj\" (UID: \"ea958ba8-bb58-498e-8c25-a5b8f413f3be\") " pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.751911 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.930843 4878 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-75pd8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" start-of-body= Dec 04 15:40:04 crc kubenswrapper[4878]: I1204 15:40:04.931292 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" podUID="1437aa02-6698-481c-ab03-8b2c02f64774" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.38:8080/healthz\": dial tcp 10.217.0.38:8080: connect: connection refused" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.189949 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b9rwj"] Dec 04 15:40:05 crc 
kubenswrapper[4878]: W1204 15:40:05.190627 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea958ba8_bb58_498e_8c25_a5b8f413f3be.slice/crio-e4986d478776376151387d6920fcfd2eb079f0904b019acdc5adfd4cbf4c9553 WatchSource:0}: Error finding container e4986d478776376151387d6920fcfd2eb079f0904b019acdc5adfd4cbf4c9553: Status 404 returned error can't find the container with id e4986d478776376151387d6920fcfd2eb079f0904b019acdc5adfd4cbf4c9553 Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.360059 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.498667 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79vp6\" (UniqueName: \"kubernetes.io/projected/1437aa02-6698-481c-ab03-8b2c02f64774-kube-api-access-79vp6\") pod \"1437aa02-6698-481c-ab03-8b2c02f64774\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.498746 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-operator-metrics\") pod \"1437aa02-6698-481c-ab03-8b2c02f64774\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.498824 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-trusted-ca\") pod \"1437aa02-6698-481c-ab03-8b2c02f64774\" (UID: \"1437aa02-6698-481c-ab03-8b2c02f64774\") " Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.499734 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "1437aa02-6698-481c-ab03-8b2c02f64774" (UID: "1437aa02-6698-481c-ab03-8b2c02f64774"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.505311 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "1437aa02-6698-481c-ab03-8b2c02f64774" (UID: "1437aa02-6698-481c-ab03-8b2c02f64774"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.505852 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1437aa02-6698-481c-ab03-8b2c02f64774-kube-api-access-79vp6" (OuterVolumeSpecName: "kube-api-access-79vp6") pod "1437aa02-6698-481c-ab03-8b2c02f64774" (UID: "1437aa02-6698-481c-ab03-8b2c02f64774"). InnerVolumeSpecName "kube-api-access-79vp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.603914 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79vp6\" (UniqueName: \"kubernetes.io/projected/1437aa02-6698-481c-ab03-8b2c02f64774-kube-api-access-79vp6\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.603943 4878 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.603957 4878 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1437aa02-6698-481c-ab03-8b2c02f64774-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.628190 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" event={"ID":"ea958ba8-bb58-498e-8c25-a5b8f413f3be","Type":"ContainerStarted","Data":"e4986d478776376151387d6920fcfd2eb079f0904b019acdc5adfd4cbf4c9553"} Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.635655 4878 generic.go:334] "Generic (PLEG): container finished" podID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerID="7af35cfdbf0cd78332293733b5c5feb30015cb724beec17e3d86667c478632b6" exitCode=0 Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.635740 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r676v" event={"ID":"3801d81c-ca75-43a3-a612-71d2d97517a6","Type":"ContainerDied","Data":"7af35cfdbf0cd78332293733b5c5feb30015cb724beec17e3d86667c478632b6"} Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.639805 4878 generic.go:334] "Generic (PLEG): container finished" podID="d556355d-6041-4606-8551-c3642a5a57b4" 
containerID="1394b19fe54734102ca7f4b11c5b5bdaeb5635eecc97e17d1b1d717529526a7a" exitCode=0 Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.639895 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8m2c" event={"ID":"d556355d-6041-4606-8551-c3642a5a57b4","Type":"ContainerDied","Data":"1394b19fe54734102ca7f4b11c5b5bdaeb5635eecc97e17d1b1d717529526a7a"} Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.653047 4878 generic.go:334] "Generic (PLEG): container finished" podID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerID="2d5a264d2030ae6e30afe7cf3986d930a832ffcd6c4ce1303f58e903127c1dce" exitCode=0 Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.653131 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm6l5" event={"ID":"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76","Type":"ContainerDied","Data":"2d5a264d2030ae6e30afe7cf3986d930a832ffcd6c4ce1303f58e903127c1dce"} Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.658589 4878 generic.go:334] "Generic (PLEG): container finished" podID="1437aa02-6698-481c-ab03-8b2c02f64774" containerID="cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666" exitCode=0 Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.658788 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" event={"ID":"1437aa02-6698-481c-ab03-8b2c02f64774","Type":"ContainerDied","Data":"cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666"} Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.658841 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" event={"ID":"1437aa02-6698-481c-ab03-8b2c02f64774","Type":"ContainerDied","Data":"d70ad031b6aadbf2e50c2f30b46289539187eb306ee5b4a4d11091afe318dcbb"} Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.658927 4878 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tljkb" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerName="registry-server" containerID="cri-o://93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d" gracePeriod=30 Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.658863 4878 scope.go:117] "RemoveContainer" containerID="cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.659107 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-75pd8" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.659147 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b6pdg" podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerName="registry-server" containerID="cri-o://f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8" gracePeriod=30 Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.697181 4878 scope.go:117] "RemoveContainer" containerID="cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666" Dec 04 15:40:05 crc kubenswrapper[4878]: E1204 15:40:05.697574 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666\": container with ID starting with cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666 not found: ID does not exist" containerID="cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.697613 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666"} err="failed to get container status 
\"cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666\": rpc error: code = NotFound desc = could not find container \"cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666\": container with ID starting with cb970e1d540f124fbdfa32431173ff4bdc6febd06496b9107724797103afe666 not found: ID does not exist" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.864979 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-75pd8"] Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.871284 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-75pd8"] Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.881762 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.935257 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:40:05 crc kubenswrapper[4878]: I1204 15:40:05.960943 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r676v" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.009471 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmcj9\" (UniqueName: \"kubernetes.io/projected/d556355d-6041-4606-8551-c3642a5a57b4-kube-api-access-qmcj9\") pod \"d556355d-6041-4606-8551-c3642a5a57b4\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.009568 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f75r7\" (UniqueName: \"kubernetes.io/projected/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-kube-api-access-f75r7\") pod \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.009662 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-catalog-content\") pod \"d556355d-6041-4606-8551-c3642a5a57b4\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.009705 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-catalog-content\") pod \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.009753 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-utilities\") pod \"d556355d-6041-4606-8551-c3642a5a57b4\" (UID: \"d556355d-6041-4606-8551-c3642a5a57b4\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.009805 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-utilities\") pod \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\" (UID: \"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.011191 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-utilities" (OuterVolumeSpecName: "utilities") pod "f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" (UID: "f8b496cc-fe4a-4308-91fd-0a7f61f1ba76"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.015840 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d556355d-6041-4606-8551-c3642a5a57b4-kube-api-access-qmcj9" (OuterVolumeSpecName: "kube-api-access-qmcj9") pod "d556355d-6041-4606-8551-c3642a5a57b4" (UID: "d556355d-6041-4606-8551-c3642a5a57b4"). InnerVolumeSpecName "kube-api-access-qmcj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.015930 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-kube-api-access-f75r7" (OuterVolumeSpecName: "kube-api-access-f75r7") pod "f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" (UID: "f8b496cc-fe4a-4308-91fd-0a7f61f1ba76"). InnerVolumeSpecName "kube-api-access-f75r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.016005 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-utilities" (OuterVolumeSpecName: "utilities") pod "d556355d-6041-4606-8551-c3642a5a57b4" (UID: "d556355d-6041-4606-8551-c3642a5a57b4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.098499 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.114582 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dhhk\" (UniqueName: \"kubernetes.io/projected/3801d81c-ca75-43a3-a612-71d2d97517a6-kube-api-access-5dhhk\") pod \"3801d81c-ca75-43a3-a612-71d2d97517a6\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.114788 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-catalog-content\") pod \"3801d81c-ca75-43a3-a612-71d2d97517a6\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.114825 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-utilities\") pod \"3801d81c-ca75-43a3-a612-71d2d97517a6\" (UID: \"3801d81c-ca75-43a3-a612-71d2d97517a6\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.115574 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-utilities" (OuterVolumeSpecName: "utilities") pod "3801d81c-ca75-43a3-a612-71d2d97517a6" (UID: "3801d81c-ca75-43a3-a612-71d2d97517a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.115864 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.115977 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.115991 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.116002 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmcj9\" (UniqueName: \"kubernetes.io/projected/d556355d-6041-4606-8551-c3642a5a57b4-kube-api-access-qmcj9\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.116014 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f75r7\" (UniqueName: \"kubernetes.io/projected/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-kube-api-access-f75r7\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.119611 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3801d81c-ca75-43a3-a612-71d2d97517a6-kube-api-access-5dhhk" (OuterVolumeSpecName: "kube-api-access-5dhhk") pod "3801d81c-ca75-43a3-a612-71d2d97517a6" (UID: "3801d81c-ca75-43a3-a612-71d2d97517a6"). InnerVolumeSpecName "kube-api-access-5dhhk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.172627 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" (UID: "f8b496cc-fe4a-4308-91fd-0a7f61f1ba76"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.201857 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.218841 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-catalog-content\") pod \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.218983 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-utilities\") pod \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.219014 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqq59\" (UniqueName: \"kubernetes.io/projected/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-kube-api-access-jqq59\") pod \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\" (UID: \"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.219395 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dhhk\" (UniqueName: 
\"kubernetes.io/projected/3801d81c-ca75-43a3-a612-71d2d97517a6-kube-api-access-5dhhk\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.219411 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.220030 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-utilities" (OuterVolumeSpecName: "utilities") pod "cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" (UID: "cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.223834 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-kube-api-access-jqq59" (OuterVolumeSpecName: "kube-api-access-jqq59") pod "cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" (UID: "cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728"). InnerVolumeSpecName "kube-api-access-jqq59". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.226579 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d556355d-6041-4606-8551-c3642a5a57b4" (UID: "d556355d-6041-4606-8551-c3642a5a57b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.285833 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" (UID: "cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.293689 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3801d81c-ca75-43a3-a612-71d2d97517a6" (UID: "3801d81c-ca75-43a3-a612-71d2d97517a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.320269 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-catalog-content\") pod \"84a615f4-5e6f-4d2e-86b6-59453037cd11\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.320345 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg7z7\" (UniqueName: \"kubernetes.io/projected/84a615f4-5e6f-4d2e-86b6-59453037cd11-kube-api-access-qg7z7\") pod \"84a615f4-5e6f-4d2e-86b6-59453037cd11\" (UID: \"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.320387 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-utilities\") pod \"84a615f4-5e6f-4d2e-86b6-59453037cd11\" (UID: 
\"84a615f4-5e6f-4d2e-86b6-59453037cd11\") " Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.320693 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.320704 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d556355d-6041-4606-8551-c3642a5a57b4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.320713 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.320724 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqq59\" (UniqueName: \"kubernetes.io/projected/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728-kube-api-access-jqq59\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.320735 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3801d81c-ca75-43a3-a612-71d2d97517a6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.322483 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-utilities" (OuterVolumeSpecName: "utilities") pod "84a615f4-5e6f-4d2e-86b6-59453037cd11" (UID: "84a615f4-5e6f-4d2e-86b6-59453037cd11"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.324918 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84a615f4-5e6f-4d2e-86b6-59453037cd11-kube-api-access-qg7z7" (OuterVolumeSpecName: "kube-api-access-qg7z7") pod "84a615f4-5e6f-4d2e-86b6-59453037cd11" (UID: "84a615f4-5e6f-4d2e-86b6-59453037cd11"). InnerVolumeSpecName "kube-api-access-qg7z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.341864 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84a615f4-5e6f-4d2e-86b6-59453037cd11" (UID: "84a615f4-5e6f-4d2e-86b6-59453037cd11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.422237 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.437960 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg7z7\" (UniqueName: \"kubernetes.io/projected/84a615f4-5e6f-4d2e-86b6-59453037cd11-kube-api-access-qg7z7\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.438034 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84a615f4-5e6f-4d2e-86b6-59453037cd11-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.668051 4878 generic.go:334] "Generic (PLEG): container finished" podID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" 
containerID="f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8" exitCode=0 Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.668145 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pdg" event={"ID":"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728","Type":"ContainerDied","Data":"f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8"} Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.668183 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pdg" event={"ID":"cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728","Type":"ContainerDied","Data":"26137f2dc9fc715f1dba103fca3f5e89fb5d4f7c6cb0ff495908dab53639da07"} Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.668179 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6pdg" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.668207 4878 scope.go:117] "RemoveContainer" containerID="f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.671153 4878 generic.go:334] "Generic (PLEG): container finished" podID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerID="93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d" exitCode=0 Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.671219 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tljkb" event={"ID":"84a615f4-5e6f-4d2e-86b6-59453037cd11","Type":"ContainerDied","Data":"93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d"} Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.671247 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tljkb" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.671246 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tljkb" event={"ID":"84a615f4-5e6f-4d2e-86b6-59453037cd11","Type":"ContainerDied","Data":"5d8e6c02f67bfe73fb093591f9481b413012f45b3f19c3f26b4c4dae2605492c"} Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.676704 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" event={"ID":"ea958ba8-bb58-498e-8c25-a5b8f413f3be","Type":"ContainerStarted","Data":"e2bdc063e0cdb8186416c34ef3162710943b9278d4bef783e1f77a54186556d9"} Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.677489 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.681807 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r676v" event={"ID":"3801d81c-ca75-43a3-a612-71d2d97517a6","Type":"ContainerDied","Data":"5ff743c3cd1e07d2fb2873200b54744ff990c90211cabb08d3af819b835bbf32"} Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.682011 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-r676v" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.686014 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.694380 4878 scope.go:117] "RemoveContainer" containerID="cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.703377 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b9rwj" podStartSLOduration=2.70335659 podStartE2EDuration="2.70335659s" podCreationTimestamp="2025-12-04 15:40:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:40:06.700072358 +0000 UTC m=+250.662609304" watchObservedRunningTime="2025-12-04 15:40:06.70335659 +0000 UTC m=+250.665893546" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.705838 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-r8m2c" event={"ID":"d556355d-6041-4606-8551-c3642a5a57b4","Type":"ContainerDied","Data":"6ef760090ac45187e8c3401feba793fc1184a111447e0b027865c4a857e80083"} Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.705921 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-r8m2c" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.712200 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zm6l5" event={"ID":"f8b496cc-fe4a-4308-91fd-0a7f61f1ba76","Type":"ContainerDied","Data":"e2249f9ba7ca46a3b459831648cd4ef0c23f0c03d5b199de7ca6add40f05e143"} Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.712321 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zm6l5" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.716914 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmjq" event={"ID":"55ca468a-efe5-4a85-95c0-ee07fc59102f","Type":"ContainerStarted","Data":"406a8b7eacd05565d9e2749857f62386bb54f6189889ce7c4e1fcb9ca238e172"} Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.717067 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gtmjq" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerName="registry-server" containerID="cri-o://406a8b7eacd05565d9e2749857f62386bb54f6189889ce7c4e1fcb9ca238e172" gracePeriod=30 Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.727160 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tljkb"] Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.740024 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tljkb"] Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.740990 4878 scope.go:117] "RemoveContainer" containerID="1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.777823 4878 scope.go:117] "RemoveContainer" containerID="f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8" Dec 04 15:40:06 crc kubenswrapper[4878]: E1204 15:40:06.779190 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8\": container with ID starting with f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8 not found: ID does not exist" containerID="f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 
15:40:06.779332 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8"} err="failed to get container status \"f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8\": rpc error: code = NotFound desc = could not find container \"f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8\": container with ID starting with f1998272cfb5b791e8509c8429ee033be784b60ad99fb3b27c4e28b8cac31fb8 not found: ID does not exist" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.779467 4878 scope.go:117] "RemoveContainer" containerID="cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0" Dec 04 15:40:06 crc kubenswrapper[4878]: E1204 15:40:06.781310 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0\": container with ID starting with cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0 not found: ID does not exist" containerID="cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.781427 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0"} err="failed to get container status \"cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0\": rpc error: code = NotFound desc = could not find container \"cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0\": container with ID starting with cbeb4cdb7aa27c5a76e0f72676bbea81f18ede4155551f8a2374ba8503b4b6a0 not found: ID does not exist" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.781534 4878 scope.go:117] "RemoveContainer" containerID="1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9" Dec 04 15:40:06 crc 
kubenswrapper[4878]: I1204 15:40:06.781740 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zm6l5"] Dec 04 15:40:06 crc kubenswrapper[4878]: E1204 15:40:06.783277 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9\": container with ID starting with 1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9 not found: ID does not exist" containerID="1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.783311 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9"} err="failed to get container status \"1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9\": rpc error: code = NotFound desc = could not find container \"1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9\": container with ID starting with 1e68aea61ed256e2bbe66762a7820694ef852f86f9ba2d317db0b5a8864d43a9 not found: ID does not exist" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.783339 4878 scope.go:117] "RemoveContainer" containerID="93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.785255 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zm6l5"] Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.792970 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gtmjq" podStartSLOduration=4.109226302 podStartE2EDuration="1m25.792950349s" podCreationTimestamp="2025-12-04 15:38:41 +0000 UTC" firstStartedPulling="2025-12-04 15:38:43.908085212 +0000 UTC m=+167.870622168" lastFinishedPulling="2025-12-04 15:40:05.591809259 
+0000 UTC m=+249.554346215" observedRunningTime="2025-12-04 15:40:06.792334054 +0000 UTC m=+250.754871030" watchObservedRunningTime="2025-12-04 15:40:06.792950349 +0000 UTC m=+250.755487295" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.811107 4878 scope.go:117] "RemoveContainer" containerID="7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.818147 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r676v"] Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.840254 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r676v"] Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.842933 4878 scope.go:117] "RemoveContainer" containerID="1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.843403 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6pdg"] Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.845964 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b6pdg"] Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.859539 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-r8m2c"] Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.862690 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-r8m2c"] Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.868991 4878 scope.go:117] "RemoveContainer" containerID="93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d" Dec 04 15:40:06 crc kubenswrapper[4878]: E1204 15:40:06.869553 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d\": container with ID starting with 93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d not found: ID does not exist" containerID="93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.869606 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d"} err="failed to get container status \"93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d\": rpc error: code = NotFound desc = could not find container \"93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d\": container with ID starting with 93d6b49f4455efd6dba670172bf6c164055360b78eb853403ef55bf418c5c46d not found: ID does not exist" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.869655 4878 scope.go:117] "RemoveContainer" containerID="7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d" Dec 04 15:40:06 crc kubenswrapper[4878]: E1204 15:40:06.870002 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d\": container with ID starting with 7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d not found: ID does not exist" containerID="7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.870035 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d"} err="failed to get container status \"7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d\": rpc error: code = NotFound desc = could not find container \"7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d\": container with ID 
starting with 7ae2f6fb278a3d91d8cf627e26e8db63a6e189aa22510bc3d45860ffc5f5ca5d not found: ID does not exist" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.870055 4878 scope.go:117] "RemoveContainer" containerID="1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2" Dec 04 15:40:06 crc kubenswrapper[4878]: E1204 15:40:06.870367 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2\": container with ID starting with 1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2 not found: ID does not exist" containerID="1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.870409 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2"} err="failed to get container status \"1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2\": rpc error: code = NotFound desc = could not find container \"1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2\": container with ID starting with 1143dda80f5ae0a82628bef87b6419fb850fb127840eb2e35a670410c325ceb2 not found: ID does not exist" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.870442 4878 scope.go:117] "RemoveContainer" containerID="7af35cfdbf0cd78332293733b5c5feb30015cb724beec17e3d86667c478632b6" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.884066 4878 scope.go:117] "RemoveContainer" containerID="8bd7915e2de32061e59101b696d26cff29be3d28bc29447ab1ba1f82226ba173" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.900488 4878 scope.go:117] "RemoveContainer" containerID="a04de338e83e4992d0011115c2afc21134fc9c45ec9c43180392ec0f428d607d" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.913008 4878 scope.go:117] "RemoveContainer" 
containerID="1394b19fe54734102ca7f4b11c5b5bdaeb5635eecc97e17d1b1d717529526a7a" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.928357 4878 scope.go:117] "RemoveContainer" containerID="e52ee4edaad15e663d4783e8ab1bd3489fcea0248d556bcf04cecac6a1c40b8b" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.949524 4878 scope.go:117] "RemoveContainer" containerID="0c262e0afc02ad99691808c6e7619c6532cbd733e359c4a0b22ddc1a76cdbdb0" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.966424 4878 scope.go:117] "RemoveContainer" containerID="2d5a264d2030ae6e30afe7cf3986d930a832ffcd6c4ce1303f58e903127c1dce" Dec 04 15:40:06 crc kubenswrapper[4878]: I1204 15:40:06.981174 4878 scope.go:117] "RemoveContainer" containerID="10a9de15cb60a9061813d568a9417ad2c5383d1c5a45ddd476e5e3471c64ebfd" Dec 04 15:40:07 crc kubenswrapper[4878]: I1204 15:40:07.000040 4878 scope.go:117] "RemoveContainer" containerID="cced5a45e4ada658b7964e6545dcb53615f98e0b05f99f32d791e403855fcaa4" Dec 04 15:40:07 crc kubenswrapper[4878]: I1204 15:40:07.186732 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1437aa02-6698-481c-ab03-8b2c02f64774" path="/var/lib/kubelet/pods/1437aa02-6698-481c-ab03-8b2c02f64774/volumes" Dec 04 15:40:07 crc kubenswrapper[4878]: I1204 15:40:07.187482 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" path="/var/lib/kubelet/pods/3801d81c-ca75-43a3-a612-71d2d97517a6/volumes" Dec 04 15:40:07 crc kubenswrapper[4878]: I1204 15:40:07.188076 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" path="/var/lib/kubelet/pods/84a615f4-5e6f-4d2e-86b6-59453037cd11/volumes" Dec 04 15:40:07 crc kubenswrapper[4878]: I1204 15:40:07.189049 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" path="/var/lib/kubelet/pods/cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728/volumes" Dec 04 15:40:07 crc 
kubenswrapper[4878]: I1204 15:40:07.189598 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d556355d-6041-4606-8551-c3642a5a57b4" path="/var/lib/kubelet/pods/d556355d-6041-4606-8551-c3642a5a57b4/volumes" Dec 04 15:40:07 crc kubenswrapper[4878]: I1204 15:40:07.190614 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" path="/var/lib/kubelet/pods/f8b496cc-fe4a-4308-91fd-0a7f61f1ba76/volumes" Dec 04 15:40:07 crc kubenswrapper[4878]: I1204 15:40:07.724201 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtmjq_55ca468a-efe5-4a85-95c0-ee07fc59102f/registry-server/0.log" Dec 04 15:40:07 crc kubenswrapper[4878]: I1204 15:40:07.725054 4878 generic.go:334] "Generic (PLEG): container finished" podID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerID="406a8b7eacd05565d9e2749857f62386bb54f6189889ce7c4e1fcb9ca238e172" exitCode=1 Dec 04 15:40:07 crc kubenswrapper[4878]: I1204 15:40:07.725128 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmjq" event={"ID":"55ca468a-efe5-4a85-95c0-ee07fc59102f","Type":"ContainerDied","Data":"406a8b7eacd05565d9e2749857f62386bb54f6189889ce7c4e1fcb9ca238e172"} Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185182 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kr2f4"] Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.185805 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d556355d-6041-4606-8551-c3642a5a57b4" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185825 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="d556355d-6041-4606-8551-c3642a5a57b4" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.185842 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185849 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.185860 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185896 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.185905 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185912 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.185920 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185925 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.185931 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185936 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.185948 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185955 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.185967 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185975 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.185990 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.185996 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.186004 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186012 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.186023 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186031 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.186042 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d556355d-6041-4606-8551-c3642a5a57b4" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186051 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="d556355d-6041-4606-8551-c3642a5a57b4" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.186064 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1437aa02-6698-481c-ab03-8b2c02f64774" containerName="marketplace-operator" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186072 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="1437aa02-6698-481c-ab03-8b2c02f64774" containerName="marketplace-operator" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.186084 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186091 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.186101 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186108 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.186119 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d556355d-6041-4606-8551-c3642a5a57b4" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186126 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="d556355d-6041-4606-8551-c3642a5a57b4" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186257 4878 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3801d81c-ca75-43a3-a612-71d2d97517a6" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186276 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="84a615f4-5e6f-4d2e-86b6-59453037cd11" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186285 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3c0c75-fcf5-4ab1-a561-1fd45cbf8728" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186297 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="d556355d-6041-4606-8551-c3642a5a57b4" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186305 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8b496cc-fe4a-4308-91fd-0a7f61f1ba76" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.186315 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="1437aa02-6698-481c-ab03-8b2c02f64774" containerName="marketplace-operator" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.187146 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.189237 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.190101 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kr2f4"] Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.261503 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-utilities\") pod \"certified-operators-kr2f4\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.261645 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmlvb\" (UniqueName: \"kubernetes.io/projected/8dd6f90d-5a20-4f66-98e1-3e59edb42928-kube-api-access-pmlvb\") pod \"certified-operators-kr2f4\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.261686 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-catalog-content\") pod \"certified-operators-kr2f4\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.334944 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtmjq_55ca468a-efe5-4a85-95c0-ee07fc59102f/registry-server/0.log" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.335839 4878 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.362639 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmlvb\" (UniqueName: \"kubernetes.io/projected/8dd6f90d-5a20-4f66-98e1-3e59edb42928-kube-api-access-pmlvb\") pod \"certified-operators-kr2f4\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.363086 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-catalog-content\") pod \"certified-operators-kr2f4\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.363141 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-utilities\") pod \"certified-operators-kr2f4\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.363582 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-utilities\") pod \"certified-operators-kr2f4\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.364010 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-catalog-content\") pod \"certified-operators-kr2f4\" (UID: 
\"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.381933 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmlvb\" (UniqueName: \"kubernetes.io/projected/8dd6f90d-5a20-4f66-98e1-3e59edb42928-kube-api-access-pmlvb\") pod \"certified-operators-kr2f4\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.465054 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh7w6\" (UniqueName: \"kubernetes.io/projected/55ca468a-efe5-4a85-95c0-ee07fc59102f-kube-api-access-fh7w6\") pod \"55ca468a-efe5-4a85-95c0-ee07fc59102f\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.465221 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-utilities\") pod \"55ca468a-efe5-4a85-95c0-ee07fc59102f\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.465247 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-catalog-content\") pod \"55ca468a-efe5-4a85-95c0-ee07fc59102f\" (UID: \"55ca468a-efe5-4a85-95c0-ee07fc59102f\") " Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.466034 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-utilities" (OuterVolumeSpecName: "utilities") pod "55ca468a-efe5-4a85-95c0-ee07fc59102f" (UID: "55ca468a-efe5-4a85-95c0-ee07fc59102f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.468254 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ca468a-efe5-4a85-95c0-ee07fc59102f-kube-api-access-fh7w6" (OuterVolumeSpecName: "kube-api-access-fh7w6") pod "55ca468a-efe5-4a85-95c0-ee07fc59102f" (UID: "55ca468a-efe5-4a85-95c0-ee07fc59102f"). InnerVolumeSpecName "kube-api-access-fh7w6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.511524 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55ca468a-efe5-4a85-95c0-ee07fc59102f" (UID: "55ca468a-efe5-4a85-95c0-ee07fc59102f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.511714 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.567176 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh7w6\" (UniqueName: \"kubernetes.io/projected/55ca468a-efe5-4a85-95c0-ee07fc59102f-kube-api-access-fh7w6\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.567219 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.567231 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ca468a-efe5-4a85-95c0-ee07fc59102f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.737081 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gtmjq_55ca468a-efe5-4a85-95c0-ee07fc59102f/registry-server/0.log" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.737973 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gtmjq" event={"ID":"55ca468a-efe5-4a85-95c0-ee07fc59102f","Type":"ContainerDied","Data":"402982e86ff6c8708cd4b318b1454d2e4d6d8fc59798aa32a0a0b73fc6d76ba3"} Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.738015 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gtmjq" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.738062 4878 scope.go:117] "RemoveContainer" containerID="406a8b7eacd05565d9e2749857f62386bb54f6189889ce7c4e1fcb9ca238e172" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.752155 4878 scope.go:117] "RemoveContainer" containerID="17a1ed76ec8e95f7e646c09042cf69286f3ec099f94206ba4407e0234dbe3950" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.779260 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gtmjq"] Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.787998 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gtmjq"] Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.792556 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7pkwt"] Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.792737 4878 scope.go:117] "RemoveContainer" containerID="5b3831ae9c8a3a4a05c4062f6f1746101b35186145818644161c68f072e32c1a" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.793098 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.793117 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerName="extract-utilities" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.793126 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.793133 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerName="extract-content" Dec 04 15:40:08 crc kubenswrapper[4878]: E1204 15:40:08.793144 4878 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.802320 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.802648 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" containerName="registry-server" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.803537 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.803431 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7pkwt"] Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.806295 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.871420 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-utilities\") pod \"redhat-operators-7pkwt\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.871551 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58rwg\" (UniqueName: \"kubernetes.io/projected/605a791d-cbd5-4a04-b896-c580ba3438fc-kube-api-access-58rwg\") pod \"redhat-operators-7pkwt\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.871582 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-catalog-content\") pod \"redhat-operators-7pkwt\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.912506 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kr2f4"] Dec 04 15:40:08 crc kubenswrapper[4878]: W1204 15:40:08.914251 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dd6f90d_5a20_4f66_98e1_3e59edb42928.slice/crio-9352001d7e129cd6f14a866dd5620f48429091ae029440e2200e5ff497e2fef6 WatchSource:0}: Error finding container 9352001d7e129cd6f14a866dd5620f48429091ae029440e2200e5ff497e2fef6: Status 404 returned error can't find the container with id 9352001d7e129cd6f14a866dd5620f48429091ae029440e2200e5ff497e2fef6 Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.973798 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-utilities\") pod \"redhat-operators-7pkwt\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.973903 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58rwg\" (UniqueName: \"kubernetes.io/projected/605a791d-cbd5-4a04-b896-c580ba3438fc-kube-api-access-58rwg\") pod \"redhat-operators-7pkwt\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.973927 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-catalog-content\") pod \"redhat-operators-7pkwt\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.974634 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-utilities\") pod \"redhat-operators-7pkwt\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.974656 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-catalog-content\") pod \"redhat-operators-7pkwt\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:08 crc kubenswrapper[4878]: I1204 15:40:08.995243 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58rwg\" (UniqueName: \"kubernetes.io/projected/605a791d-cbd5-4a04-b896-c580ba3438fc-kube-api-access-58rwg\") pod \"redhat-operators-7pkwt\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:09 crc kubenswrapper[4878]: I1204 15:40:09.131862 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:09 crc kubenswrapper[4878]: I1204 15:40:09.193610 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ca468a-efe5-4a85-95c0-ee07fc59102f" path="/var/lib/kubelet/pods/55ca468a-efe5-4a85-95c0-ee07fc59102f/volumes" Dec 04 15:40:09 crc kubenswrapper[4878]: I1204 15:40:09.525426 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7pkwt"] Dec 04 15:40:09 crc kubenswrapper[4878]: I1204 15:40:09.749747 4878 generic.go:334] "Generic (PLEG): container finished" podID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerID="965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243" exitCode=0 Dec 04 15:40:09 crc kubenswrapper[4878]: I1204 15:40:09.749813 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr2f4" event={"ID":"8dd6f90d-5a20-4f66-98e1-3e59edb42928","Type":"ContainerDied","Data":"965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243"} Dec 04 15:40:09 crc kubenswrapper[4878]: I1204 15:40:09.750282 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr2f4" event={"ID":"8dd6f90d-5a20-4f66-98e1-3e59edb42928","Type":"ContainerStarted","Data":"9352001d7e129cd6f14a866dd5620f48429091ae029440e2200e5ff497e2fef6"} Dec 04 15:40:09 crc kubenswrapper[4878]: I1204 15:40:09.755835 4878 generic.go:334] "Generic (PLEG): container finished" podID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerID="b2efefdcc12e9ac5475ff031485463394b657aa54862f7d489169fdfbb1b9bcd" exitCode=0 Dec 04 15:40:09 crc kubenswrapper[4878]: I1204 15:40:09.755902 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pkwt" event={"ID":"605a791d-cbd5-4a04-b896-c580ba3438fc","Type":"ContainerDied","Data":"b2efefdcc12e9ac5475ff031485463394b657aa54862f7d489169fdfbb1b9bcd"} Dec 04 15:40:09 crc 
kubenswrapper[4878]: I1204 15:40:09.755931 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pkwt" event={"ID":"605a791d-cbd5-4a04-b896-c580ba3438fc","Type":"ContainerStarted","Data":"495330b7f1b83d60e9f9c7fcab4929c3383f2426c8b7822764de8c8744c2dfdd"} Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.577151 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-47cwt"] Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.578355 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.582602 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.583105 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47cwt"] Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.703085 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b883c34-95fa-4a50-912e-513bf11d581d-utilities\") pod \"community-operators-47cwt\" (UID: \"0b883c34-95fa-4a50-912e-513bf11d581d\") " pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.703141 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b883c34-95fa-4a50-912e-513bf11d581d-catalog-content\") pod \"community-operators-47cwt\" (UID: \"0b883c34-95fa-4a50-912e-513bf11d581d\") " pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.703280 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-58qdk\" (UniqueName: \"kubernetes.io/projected/0b883c34-95fa-4a50-912e-513bf11d581d-kube-api-access-58qdk\") pod \"community-operators-47cwt\" (UID: \"0b883c34-95fa-4a50-912e-513bf11d581d\") " pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.804038 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58qdk\" (UniqueName: \"kubernetes.io/projected/0b883c34-95fa-4a50-912e-513bf11d581d-kube-api-access-58qdk\") pod \"community-operators-47cwt\" (UID: \"0b883c34-95fa-4a50-912e-513bf11d581d\") " pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.804103 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b883c34-95fa-4a50-912e-513bf11d581d-utilities\") pod \"community-operators-47cwt\" (UID: \"0b883c34-95fa-4a50-912e-513bf11d581d\") " pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.804126 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b883c34-95fa-4a50-912e-513bf11d581d-catalog-content\") pod \"community-operators-47cwt\" (UID: \"0b883c34-95fa-4a50-912e-513bf11d581d\") " pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.804610 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b883c34-95fa-4a50-912e-513bf11d581d-catalog-content\") pod \"community-operators-47cwt\" (UID: \"0b883c34-95fa-4a50-912e-513bf11d581d\") " pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.804731 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b883c34-95fa-4a50-912e-513bf11d581d-utilities\") pod \"community-operators-47cwt\" (UID: \"0b883c34-95fa-4a50-912e-513bf11d581d\") " pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.827735 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58qdk\" (UniqueName: \"kubernetes.io/projected/0b883c34-95fa-4a50-912e-513bf11d581d-kube-api-access-58qdk\") pod \"community-operators-47cwt\" (UID: \"0b883c34-95fa-4a50-912e-513bf11d581d\") " pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:10 crc kubenswrapper[4878]: I1204 15:40:10.902726 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.176446 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w8mmv"] Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.181217 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.185749 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.197737 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8mmv"] Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.295751 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47cwt"] Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.322062 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805ee025-4add-4f95-a2f7-64c73eccd9fa-utilities\") pod \"redhat-marketplace-w8mmv\" (UID: \"805ee025-4add-4f95-a2f7-64c73eccd9fa\") " pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.322436 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfkhv\" (UniqueName: \"kubernetes.io/projected/805ee025-4add-4f95-a2f7-64c73eccd9fa-kube-api-access-wfkhv\") pod \"redhat-marketplace-w8mmv\" (UID: \"805ee025-4add-4f95-a2f7-64c73eccd9fa\") " pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.322515 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805ee025-4add-4f95-a2f7-64c73eccd9fa-catalog-content\") pod \"redhat-marketplace-w8mmv\" (UID: \"805ee025-4add-4f95-a2f7-64c73eccd9fa\") " pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.423706 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805ee025-4add-4f95-a2f7-64c73eccd9fa-catalog-content\") pod \"redhat-marketplace-w8mmv\" (UID: \"805ee025-4add-4f95-a2f7-64c73eccd9fa\") " pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.423850 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805ee025-4add-4f95-a2f7-64c73eccd9fa-utilities\") pod \"redhat-marketplace-w8mmv\" (UID: \"805ee025-4add-4f95-a2f7-64c73eccd9fa\") " pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.423943 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfkhv\" (UniqueName: \"kubernetes.io/projected/805ee025-4add-4f95-a2f7-64c73eccd9fa-kube-api-access-wfkhv\") pod \"redhat-marketplace-w8mmv\" (UID: \"805ee025-4add-4f95-a2f7-64c73eccd9fa\") " pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.424437 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/805ee025-4add-4f95-a2f7-64c73eccd9fa-catalog-content\") pod \"redhat-marketplace-w8mmv\" (UID: \"805ee025-4add-4f95-a2f7-64c73eccd9fa\") " pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.424542 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/805ee025-4add-4f95-a2f7-64c73eccd9fa-utilities\") pod \"redhat-marketplace-w8mmv\" (UID: \"805ee025-4add-4f95-a2f7-64c73eccd9fa\") " pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.445097 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfkhv\" (UniqueName: 
\"kubernetes.io/projected/805ee025-4add-4f95-a2f7-64c73eccd9fa-kube-api-access-wfkhv\") pod \"redhat-marketplace-w8mmv\" (UID: \"805ee025-4add-4f95-a2f7-64c73eccd9fa\") " pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.501853 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.768955 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pkwt" event={"ID":"605a791d-cbd5-4a04-b896-c580ba3438fc","Type":"ContainerStarted","Data":"36d6e988002c0e46e1e59b95a7f0104314f8bdf018c6c65d96c7ab02a2c81833"} Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.770905 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr2f4" event={"ID":"8dd6f90d-5a20-4f66-98e1-3e59edb42928","Type":"ContainerDied","Data":"095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543"} Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.770955 4878 generic.go:334] "Generic (PLEG): container finished" podID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerID="095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543" exitCode=0 Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.772618 4878 generic.go:334] "Generic (PLEG): container finished" podID="0b883c34-95fa-4a50-912e-513bf11d581d" containerID="a12b354afef455c386c079b40e2edb32f71905f1bc267870d6cdbd27893eaee1" exitCode=0 Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.772675 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cwt" event={"ID":"0b883c34-95fa-4a50-912e-513bf11d581d","Type":"ContainerDied","Data":"a12b354afef455c386c079b40e2edb32f71905f1bc267870d6cdbd27893eaee1"} Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.772703 4878 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-47cwt" event={"ID":"0b883c34-95fa-4a50-912e-513bf11d581d","Type":"ContainerStarted","Data":"b5e0b0d8d05741de9486d1ddab7e98ebb5fc14f8d9f26f055a1e4611afd0b0a4"} Dec 04 15:40:11 crc kubenswrapper[4878]: I1204 15:40:11.908630 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8mmv"] Dec 04 15:40:11 crc kubenswrapper[4878]: W1204 15:40:11.918290 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod805ee025_4add_4f95_a2f7_64c73eccd9fa.slice/crio-c068074122b09455953b80f5c507760816ddd20a005d687bdfb261aa05a192e5 WatchSource:0}: Error finding container c068074122b09455953b80f5c507760816ddd20a005d687bdfb261aa05a192e5: Status 404 returned error can't find the container with id c068074122b09455953b80f5c507760816ddd20a005d687bdfb261aa05a192e5 Dec 04 15:40:12 crc kubenswrapper[4878]: I1204 15:40:12.779609 4878 generic.go:334] "Generic (PLEG): container finished" podID="0b883c34-95fa-4a50-912e-513bf11d581d" containerID="87ab8880e559cccc380bcb35782b285729cd7691cda06603eaaa36e3fb6be0de" exitCode=0 Dec 04 15:40:12 crc kubenswrapper[4878]: I1204 15:40:12.779818 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cwt" event={"ID":"0b883c34-95fa-4a50-912e-513bf11d581d","Type":"ContainerDied","Data":"87ab8880e559cccc380bcb35782b285729cd7691cda06603eaaa36e3fb6be0de"} Dec 04 15:40:12 crc kubenswrapper[4878]: I1204 15:40:12.782266 4878 generic.go:334] "Generic (PLEG): container finished" podID="805ee025-4add-4f95-a2f7-64c73eccd9fa" containerID="20bf86d1b6748dece25960fc68f30536fa225da00b2b384020f55fdbefd162dc" exitCode=0 Dec 04 15:40:12 crc kubenswrapper[4878]: I1204 15:40:12.782415 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8mmv" 
event={"ID":"805ee025-4add-4f95-a2f7-64c73eccd9fa","Type":"ContainerDied","Data":"20bf86d1b6748dece25960fc68f30536fa225da00b2b384020f55fdbefd162dc"} Dec 04 15:40:12 crc kubenswrapper[4878]: I1204 15:40:12.782464 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8mmv" event={"ID":"805ee025-4add-4f95-a2f7-64c73eccd9fa","Type":"ContainerStarted","Data":"c068074122b09455953b80f5c507760816ddd20a005d687bdfb261aa05a192e5"} Dec 04 15:40:12 crc kubenswrapper[4878]: I1204 15:40:12.786221 4878 generic.go:334] "Generic (PLEG): container finished" podID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerID="36d6e988002c0e46e1e59b95a7f0104314f8bdf018c6c65d96c7ab02a2c81833" exitCode=0 Dec 04 15:40:12 crc kubenswrapper[4878]: I1204 15:40:12.786277 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pkwt" event={"ID":"605a791d-cbd5-4a04-b896-c580ba3438fc","Type":"ContainerDied","Data":"36d6e988002c0e46e1e59b95a7f0104314f8bdf018c6c65d96c7ab02a2c81833"} Dec 04 15:40:12 crc kubenswrapper[4878]: I1204 15:40:12.793388 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr2f4" event={"ID":"8dd6f90d-5a20-4f66-98e1-3e59edb42928","Type":"ContainerStarted","Data":"c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e"} Dec 04 15:40:12 crc kubenswrapper[4878]: I1204 15:40:12.858646 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kr2f4" podStartSLOduration=2.464178407 podStartE2EDuration="4.858622051s" podCreationTimestamp="2025-12-04 15:40:08 +0000 UTC" firstStartedPulling="2025-12-04 15:40:09.751369014 +0000 UTC m=+253.713905970" lastFinishedPulling="2025-12-04 15:40:12.145812658 +0000 UTC m=+256.108349614" observedRunningTime="2025-12-04 15:40:12.854930979 +0000 UTC m=+256.817467945" watchObservedRunningTime="2025-12-04 15:40:12.858622051 +0000 UTC 
m=+256.821159007" Dec 04 15:40:13 crc kubenswrapper[4878]: I1204 15:40:13.801970 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pkwt" event={"ID":"605a791d-cbd5-4a04-b896-c580ba3438fc","Type":"ContainerStarted","Data":"d4e1e66cc94751c9e19616128b1a5eaff1dbf026d807c829448db8961e9faee3"} Dec 04 15:40:13 crc kubenswrapper[4878]: I1204 15:40:13.807676 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47cwt" event={"ID":"0b883c34-95fa-4a50-912e-513bf11d581d","Type":"ContainerStarted","Data":"e646f3a8d0249bd7c928048fca3905991a0ce247373ecc0645eb91aa1877b2d9"} Dec 04 15:40:13 crc kubenswrapper[4878]: I1204 15:40:13.811808 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8mmv" event={"ID":"805ee025-4add-4f95-a2f7-64c73eccd9fa","Type":"ContainerStarted","Data":"a4685c0a1058e03defde6b27a7c0516015e1cfc102591770e0955b2fbc8dbb62"} Dec 04 15:40:13 crc kubenswrapper[4878]: I1204 15:40:13.821913 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7pkwt" podStartSLOduration=2.069771025 podStartE2EDuration="5.821895494s" podCreationTimestamp="2025-12-04 15:40:08 +0000 UTC" firstStartedPulling="2025-12-04 15:40:09.757698391 +0000 UTC m=+253.720235347" lastFinishedPulling="2025-12-04 15:40:13.50982286 +0000 UTC m=+257.472359816" observedRunningTime="2025-12-04 15:40:13.820819987 +0000 UTC m=+257.783356943" watchObservedRunningTime="2025-12-04 15:40:13.821895494 +0000 UTC m=+257.784432450" Dec 04 15:40:13 crc kubenswrapper[4878]: I1204 15:40:13.842561 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-47cwt" podStartSLOduration=2.263155237 podStartE2EDuration="3.842538417s" podCreationTimestamp="2025-12-04 15:40:10 +0000 UTC" firstStartedPulling="2025-12-04 15:40:11.774020079 +0000 UTC 
m=+255.736557025" lastFinishedPulling="2025-12-04 15:40:13.353403249 +0000 UTC m=+257.315940205" observedRunningTime="2025-12-04 15:40:13.842003894 +0000 UTC m=+257.804540860" watchObservedRunningTime="2025-12-04 15:40:13.842538417 +0000 UTC m=+257.805075373" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.819243 4878 generic.go:334] "Generic (PLEG): container finished" podID="805ee025-4add-4f95-a2f7-64c73eccd9fa" containerID="a4685c0a1058e03defde6b27a7c0516015e1cfc102591770e0955b2fbc8dbb62" exitCode=0 Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.819400 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8mmv" event={"ID":"805ee025-4add-4f95-a2f7-64c73eccd9fa","Type":"ContainerDied","Data":"a4685c0a1058e03defde6b27a7c0516015e1cfc102591770e0955b2fbc8dbb62"} Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.887839 4878 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.889183 4878 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.889480 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.889673 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0" gracePeriod=15 Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.889935 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515" gracePeriod=15 Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.889923 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd" gracePeriod=15 Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.890003 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b" gracePeriod=15 Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.889970 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689" gracePeriod=15 Dec 04 15:40:14 crc 
kubenswrapper[4878]: I1204 15:40:14.890770 4878 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 15:40:14 crc kubenswrapper[4878]: E1204 15:40:14.891181 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891204 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 04 15:40:14 crc kubenswrapper[4878]: E1204 15:40:14.891216 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891224 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 04 15:40:14 crc kubenswrapper[4878]: E1204 15:40:14.891252 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891260 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 04 15:40:14 crc kubenswrapper[4878]: E1204 15:40:14.891269 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891276 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 04 15:40:14 crc kubenswrapper[4878]: E1204 15:40:14.891286 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891294 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 15:40:14 crc kubenswrapper[4878]: E1204 15:40:14.891307 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891315 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891455 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891472 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891483 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891494 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.891503 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.903540 4878 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 04 
15:40:14 crc kubenswrapper[4878]: [+]log ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]api-openshift-apiserver-available ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]api-openshift-oauth-apiserver-available ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]informer-sync ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/generic-apiserver-start-informers ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/priority-and-fairness-filter ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/start-apiextensions-informers ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/start-apiextensions-controllers ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/crd-informer-synced ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/start-system-namespaces-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 04 15:40:14 crc kubenswrapper[4878]: 
[+]poststarthook/start-legacy-token-tracking-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/rbac/bootstrap-roles ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/bootstrap-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/start-kube-aggregator-informers ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/apiservice-status-remote-available-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/apiservice-registration-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/apiservice-discovery-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]autoregister-completion ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/apiservice-openapi-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 04 15:40:14 crc kubenswrapper[4878]: [-]shutdown failed: reason withheld Dec 04 15:40:14 crc kubenswrapper[4878]: readyz check failed Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.903657 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.979784 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.979838 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.979897 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.979918 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.979940 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.979955 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.979984 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:14 crc kubenswrapper[4878]: I1204 15:40:14.980043 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.083612 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.083676 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.084085 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.084102 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.084226 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.083701 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.084916 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 
04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.085014 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.084952 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.085084 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.085143 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.085244 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc 
kubenswrapper[4878]: I1204 15:40:15.085286 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.085306 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.085354 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.085446 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:15 crc kubenswrapper[4878]: E1204 15:40:15.240240 4878 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-w8mmv.187e0d613ba2c92a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-w8mmv,UID:805ee025-4add-4f95-a2f7-64c73eccd9fa,APIVersion:v1,ResourceVersion:29469,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 417ms (417ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 15:40:15.239325994 +0000 UTC m=+259.201862950,LastTimestamp:2025-12-04 15:40:15.239325994 +0000 UTC m=+259.201862950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.826709 4878 generic.go:334] "Generic (PLEG): container finished" podID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" containerID="e185dde9a772afb28a6549a87e4e04e63c2a1ade00877f8ff553a2c22898f342" exitCode=0 Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.826799 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"70ac35b6-291f-4c1e-af49-7cb620da5ca1","Type":"ContainerDied","Data":"e185dde9a772afb28a6549a87e4e04e63c2a1ade00877f8ff553a2c22898f342"} Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.827717 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.830085 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 
15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.830705 4878 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515" exitCode=0 Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.830728 4878 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b" exitCode=0 Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.830736 4878 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd" exitCode=0 Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.830742 4878 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689" exitCode=2 Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.833217 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8mmv" event={"ID":"805ee025-4add-4f95-a2f7-64c73eccd9fa","Type":"ContainerStarted","Data":"7fb28919efde7c4f12725657f8752ea065e052416c71d04f13680dcb82348264"} Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.834423 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:15 crc kubenswrapper[4878]: I1204 15:40:15.834938 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.183119 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.188359 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.214898 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.215565 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.216266 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.330524 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kubelet-dir\") pod \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.330580 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "70ac35b6-291f-4c1e-af49-7cb620da5ca1" (UID: "70ac35b6-291f-4c1e-af49-7cb620da5ca1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.330940 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-var-lock\") pod \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.330964 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kube-api-access\") pod \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\" (UID: \"70ac35b6-291f-4c1e-af49-7cb620da5ca1\") " Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.331015 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-var-lock" (OuterVolumeSpecName: "var-lock") pod "70ac35b6-291f-4c1e-af49-7cb620da5ca1" (UID: "70ac35b6-291f-4c1e-af49-7cb620da5ca1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.331525 4878 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.331566 4878 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/70ac35b6-291f-4c1e-af49-7cb620da5ca1-var-lock\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.336364 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "70ac35b6-291f-4c1e-af49-7cb620da5ca1" (UID: "70ac35b6-291f-4c1e-af49-7cb620da5ca1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.433422 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/70ac35b6-291f-4c1e-af49-7cb620da5ca1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:17 crc kubenswrapper[4878]: E1204 15:40:17.462390 4878 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-w8mmv.187e0d613ba2c92a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-w8mmv,UID:805ee025-4add-4f95-a2f7-64c73eccd9fa,APIVersion:v1,ResourceVersion:29469,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 417ms (417ms including waiting). Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 15:40:15.239325994 +0000 UTC m=+259.201862950,LastTimestamp:2025-12-04 15:40:15.239325994 +0000 UTC m=+259.201862950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.847489 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"70ac35b6-291f-4c1e-af49-7cb620da5ca1","Type":"ContainerDied","Data":"2ad98f9961664531eea70fc460f4966a18952986c58afbe65c94a3c98728efce"} Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.847541 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ad98f9961664531eea70fc460f4966a18952986c58afbe65c94a3c98728efce" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.847630 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.861315 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:17 crc kubenswrapper[4878]: I1204 15:40:17.861537 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.439743 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.440497 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.440757 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.441041 4878 status_manager.go:851] "Failed to get status for pod" 
podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: E1204 15:40:18.473433 4878 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" volumeName="registry-storage" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.512743 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.513181 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.563310 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.565271 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.565642 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.565991 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.566277 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.861710 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.862467 4878 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0" exitCode=0 Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.903807 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.904555 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.905030 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.905608 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:18 crc kubenswrapper[4878]: I1204 15:40:18.906014 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.132989 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.133087 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.172396 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.173483 4878 
status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.174021 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.174395 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.174795 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.175226 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.637543 4878 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.638371 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.639054 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.639405 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.639832 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.640110 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 
15:40:19.640413 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.640705 4878 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.765515 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.765643 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.765654 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.765718 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.765811 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.765902 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.766296 4878 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.766315 4878 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.766326 4878 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.870124 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.871104 4878 scope.go:117] "RemoveContainer" containerID="d8c62791c801801e51784e6e3c8cd588f29375432300aacf10ae3289c807e515" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.871187 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.888556 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.888723 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.888863 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.889016 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.889149 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": 
dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.889299 4878 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.897949 4878 scope.go:117] "RemoveContainer" containerID="2fa96666d75a78c7ab71c447f1d3d0c80f358e3d8ee2a26fac205adc731d9e5b" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.911138 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.911657 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.912244 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.912520 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection 
refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.912592 4878 scope.go:117] "RemoveContainer" containerID="64e7b6fe9ba46e51a4946d1001b2219d32112e312a78de26846baafd84b93edd" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.912761 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.914111 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.914603 4878 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.928616 4878 scope.go:117] "RemoveContainer" containerID="eaa3723eda12cabeb0ba587cf7064cf6eb34f2ced2010636f6544683cac94689" Dec 04 15:40:19 crc kubenswrapper[4878]: E1204 15:40:19.933103 4878 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.933751 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.942288 4878 scope.go:117] "RemoveContainer" containerID="4a70f21d3609d2bab93daebe6e02904135509fa04f3af189d37ffbfd488cbdd0" Dec 04 15:40:19 crc kubenswrapper[4878]: I1204 15:40:19.959537 4878 scope.go:117] "RemoveContainer" containerID="9f4c171402b82a3f36ace43577ad581db91c04b423125916a476803063f0d59e" Dec 04 15:40:19 crc kubenswrapper[4878]: W1204 15:40:19.970575 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c6537a3b83b1ba2377948e7667457b8c392620dcb287fbdc19dfb57c8b098869 WatchSource:0}: Error finding container c6537a3b83b1ba2377948e7667457b8c392620dcb287fbdc19dfb57c8b098869: Status 404 returned error can't find the container with id c6537a3b83b1ba2377948e7667457b8c392620dcb287fbdc19dfb57c8b098869 Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.879979 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"43ff6527bb0f033273040738ea7a10e5dc55c08a56aba14a73c2a9c7a02ac554"} Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.880314 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c6537a3b83b1ba2377948e7667457b8c392620dcb287fbdc19dfb57c8b098869"} Dec 04 15:40:20 crc kubenswrapper[4878]: E1204 15:40:20.881021 4878 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 
15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.881455 4878 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.881692 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.881985 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.882956 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.883376 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc 
kubenswrapper[4878]: I1204 15:40:20.883603 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.903097 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.903625 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.950564 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.951351 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.951770 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.952146 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.952435 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.952744 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.952985 4878 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:20 crc kubenswrapper[4878]: I1204 15:40:20.953424 4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.185733 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 04 15:40:21 crc 
kubenswrapper[4878]: I1204 15:40:21.503628 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.503861 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.539106 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.539834 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.540579 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.541114 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.541440 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.541787 4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.542197 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.935143 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w8mmv" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.935766 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.935892 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47cwt" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.936232 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.936594 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.937060 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.937357 4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.937667 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.938101 4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.938447 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.938751 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.939103 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.939411 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:21 crc kubenswrapper[4878]: I1204 15:40:21.939727 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:23 crc kubenswrapper[4878]: E1204 15:40:23.153278 4878 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:23 crc kubenswrapper[4878]: E1204 15:40:23.154470 4878 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:23 crc kubenswrapper[4878]: E1204 15:40:23.155143 4878 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:23 crc kubenswrapper[4878]: E1204 15:40:23.155455 4878 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:23 crc kubenswrapper[4878]: E1204 15:40:23.156072 4878 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:23 crc kubenswrapper[4878]: I1204 15:40:23.156160 4878 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 04 15:40:23 crc kubenswrapper[4878]: E1204 15:40:23.156811 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms" Dec 04 15:40:23 crc kubenswrapper[4878]: E1204 15:40:23.358179 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Dec 04 15:40:23 crc kubenswrapper[4878]: E1204 15:40:23.758934 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Dec 04 15:40:24 crc kubenswrapper[4878]: E1204 15:40:24.560320 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Dec 04 15:40:25 crc kubenswrapper[4878]: I1204 15:40:25.247515 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:40:25 crc kubenswrapper[4878]: I1204 15:40:25.248057 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:40:25 crc kubenswrapper[4878]: W1204 15:40:25.248320 4878 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27180": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:40:25 crc kubenswrapper[4878]: E1204 15:40:25.248394 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27180\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:40:25 crc kubenswrapper[4878]: W1204 15:40:25.248510 4878 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27173": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:40:25 crc kubenswrapper[4878]: E1204 15:40:25.248570 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27173\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:40:26 crc kubenswrapper[4878]: E1204 15:40:26.160967 4878 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="3.2s" Dec 04 15:40:26 crc kubenswrapper[4878]: E1204 15:40:26.220484 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 04 15:40:26 crc kubenswrapper[4878]: E1204 15:40:26.248337 4878 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Dec 04 15:40:26 crc kubenswrapper[4878]: E1204 15:40:26.248417 4878 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Dec 04 15:40:26 crc kubenswrapper[4878]: E1204 15:40:26.248464 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:42:28.248435317 +0000 UTC m=+392.210972273 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Dec 04 15:40:26 crc kubenswrapper[4878]: E1204 15:40:26.248561 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-04 15:42:28.248503529 +0000 UTC m=+392.211040555 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Dec 04 15:40:26 crc kubenswrapper[4878]: W1204 15:40:26.884513 4878 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27173": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:40:26 crc kubenswrapper[4878]: E1204 15:40:26.884635 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27173\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:40:27 crc kubenswrapper[4878]: I1204 
15:40:27.182306 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:27 crc kubenswrapper[4878]: I1204 15:40:27.182819 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:27 crc kubenswrapper[4878]: I1204 15:40:27.183037 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:27 crc kubenswrapper[4878]: I1204 15:40:27.183191 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:27 crc kubenswrapper[4878]: I1204 15:40:27.183484 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:27 crc kubenswrapper[4878]: I1204 15:40:27.183895 
4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:27 crc kubenswrapper[4878]: E1204 15:40:27.464027 4878 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{redhat-marketplace-w8mmv.187e0d613ba2c92a openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-w8mmv,UID:805ee025-4add-4f95-a2f7-64c73eccd9fa,APIVersion:v1,ResourceVersion:29469,FieldPath:spec.containers{registry-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\" in 417ms (417ms including waiting). 
Image size: 907837715 bytes.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-04 15:40:15.239325994 +0000 UTC m=+259.201862950,LastTimestamp:2025-12-04 15:40:15.239325994 +0000 UTC m=+259.201862950,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 04 15:40:28 crc kubenswrapper[4878]: W1204 15:40:28.171600 4878 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27180": dial tcp 38.102.83.98:6443: connect: connection refused Dec 04 15:40:28 crc kubenswrapper[4878]: E1204 15:40:28.171716 4878 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27180\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.565891 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" containerName="oauth-openshift" containerID="cri-o://e838de45a687a1c64fa153c01c85c9bb9c1185c3459e9176f708886281923aa0" gracePeriod=15 Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.934706 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.935105 4878 generic.go:334] "Generic 
(PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376" exitCode=1 Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.935170 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376"} Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.935731 4878 scope.go:117] "RemoveContainer" containerID="68c2e47db3b6a7474d72344f19510cdabcfcb1e674d8c2101514cc04b132c376" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.936854 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.938226 4878 generic.go:334] "Generic (PLEG): container finished" podID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" containerID="e838de45a687a1c64fa153c01c85c9bb9c1185c3459e9176f708886281923aa0" exitCode=0 Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.938276 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" event={"ID":"0c31dded-d5e0-4f14-8de8-c4cf3ec56236","Type":"ContainerDied","Data":"e838de45a687a1c64fa153c01c85c9bb9c1185c3459e9176f708886281923aa0"} Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.938329 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" event={"ID":"0c31dded-d5e0-4f14-8de8-c4cf3ec56236","Type":"ContainerDied","Data":"3110ee6fdfb0b28543894e7db5e43486dc078d3b11768d2a83c9cac4ce19db2d"} 
Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.938352 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3110ee6fdfb0b28543894e7db5e43486dc078d3b11768d2a83c9cac4ce19db2d" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.938552 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.939020 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.939233 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.939495 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.939848 4878 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.940097 4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.955756 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.956561 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.956831 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.957282 4878 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.957706 4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.958036 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.958353 4878 status_manager.go:851] "Failed to get status for pod" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g9zqn\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.958626 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:28 crc kubenswrapper[4878]: I1204 15:40:28.958938 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" 
pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101646 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-router-certs\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101706 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-login\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101747 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-idp-0-file-data\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101781 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-trusted-ca-bundle\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101804 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-error\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101842 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-policies\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101889 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kchrd\" (UniqueName: \"kubernetes.io/projected/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-kube-api-access-kchrd\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101912 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-ocp-branding-template\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101936 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-cliconfig\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.101991 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-service-ca\") pod 
\"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.102042 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-provider-selection\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.102085 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-session\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.102124 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-serving-cert\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.102152 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-dir\") pod \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\" (UID: \"0c31dded-d5e0-4f14-8de8-c4cf3ec56236\") " Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.102434 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.103020 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.103184 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.103187 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.104791 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.109453 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-kube-api-access-kchrd" (OuterVolumeSpecName: "kube-api-access-kchrd") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "kube-api-access-kchrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.109481 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.109731 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.110713 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.113767 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.118193 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.121086 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.121403 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.121432 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0c31dded-d5e0-4f14-8de8-c4cf3ec56236" (UID: "0c31dded-d5e0-4f14-8de8-c4cf3ec56236"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.179713 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.181544 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.182140 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.182389 4878 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.182663 
4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.182924 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.183172 4878 status_manager.go:851] "Failed to get status for pod" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g9zqn\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.183406 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.183704 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 
15:40:29.192827 4878 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.194061 4878 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:29 crc kubenswrapper[4878]: E1204 15:40:29.198234 4878 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.199033 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204229 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204264 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204276 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204287 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204299 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204310 4878 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204325 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kchrd\" (UniqueName: \"kubernetes.io/projected/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-kube-api-access-kchrd\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204334 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204342 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204352 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204361 4878 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204371 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204382 4878 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.204390 4878 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0c31dded-d5e0-4f14-8de8-c4cf3ec56236-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:40:29 crc kubenswrapper[4878]: E1204 15:40:29.361805 4878 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="6.4s" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.946835 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.946970 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4b96be12ab7fc7a793df23a97db674bc7559c1abe3bcb50da95c52b8695bb843"} Dec 04 
15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.948440 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.948999 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.949432 4878 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.949803 4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.950180 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: 
connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.950473 4878 status_manager.go:851] "Failed to get status for pod" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g9zqn\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.950907 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.951461 4878 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="64a603073816afeb52e3931de8388de76072b8c638a25335fe0c7b7fb5ffef94" exitCode=0 Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.951466 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.951553 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"64a603073816afeb52e3931de8388de76072b8c638a25335fe0c7b7fb5ffef94"} Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.951635 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"836a54c0807208ca11bbffaf64670d4d5a775f77165ba8a70c4796aa7f687313"} Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.951591 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.952116 4878 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.952137 4878 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:29 crc kubenswrapper[4878]: E1204 15:40:29.952700 4878 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.953507 4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.954088 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.954449 4878 
status_manager.go:851] "Failed to get status for pod" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g9zqn\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.954721 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.954972 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.955148 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.955498 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.956445 4878 
status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.956822 4878 status_manager.go:851] "Failed to get status for pod" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" pod="openshift-authentication/oauth-openshift-558db77b4-g9zqn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-g9zqn\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.957075 4878 status_manager.go:851] "Failed to get status for pod" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.957396 4878 status_manager.go:851] "Failed to get status for pod" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" pod="openshift-marketplace/certified-operators-kr2f4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-kr2f4\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.957793 4878 status_manager.go:851] "Failed to get status for pod" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" pod="openshift-marketplace/redhat-operators-7pkwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-7pkwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 
15:40:29.958420 4878 status_manager.go:851] "Failed to get status for pod" podUID="805ee025-4add-4f95-a2f7-64c73eccd9fa" pod="openshift-marketplace/redhat-marketplace-w8mmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-w8mmv\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.958693 4878 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.959012 4878 status_manager.go:851] "Failed to get status for pod" podUID="0b883c34-95fa-4a50-912e-513bf11d581d" pod="openshift-marketplace/community-operators-47cwt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-47cwt\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:29 crc kubenswrapper[4878]: I1204 15:40:29.959308 4878 status_manager.go:851] "Failed to get status for pod" podUID="bdbaf836-ebaf-48ea-91c5-ac9e5f2a2947" pod="openshift-image-registry/image-registry-66df7c8f76-94nl9" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-94nl9\": dial tcp 38.102.83.98:6443: connect: connection refused" Dec 04 15:40:30 crc kubenswrapper[4878]: E1204 15:40:30.243563 4878 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.98:6443: 
connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" volumeName="registry-storage" Dec 04 15:40:30 crc kubenswrapper[4878]: I1204 15:40:30.965055 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dc5e0d26bc388f30ebc49db80aad0d1ced2480d309aa31cb8ddf01144be49563"} Dec 04 15:40:30 crc kubenswrapper[4878]: I1204 15:40:30.965118 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f373841d83019e78b10863dd4c66f04394b53d6337ddae068a45dc6c1e966791"} Dec 04 15:40:31 crc kubenswrapper[4878]: I1204 15:40:31.596295 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:40:31 crc kubenswrapper[4878]: I1204 15:40:31.596454 4878 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 04 15:40:31 crc kubenswrapper[4878]: I1204 15:40:31.597436 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 04 15:40:31 crc kubenswrapper[4878]: I1204 15:40:31.976235 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"210c17f871279a285e5d6b6f61ed9367d13160ed52ee41aee7264ad491a8713c"} Dec 04 15:40:31 crc kubenswrapper[4878]: I1204 15:40:31.976297 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d7fbbe9aef232b26fd5b69e9540866f84f1762088f1355542badcc76e76af23c"} Dec 04 15:40:31 crc kubenswrapper[4878]: I1204 15:40:31.976320 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"86b9f1f09ce029ba88fad849106b5faaeacce6f9bf9506f241994b850832eda5"} Dec 04 15:40:31 crc kubenswrapper[4878]: I1204 15:40:31.976435 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:31 crc kubenswrapper[4878]: I1204 15:40:31.976652 4878 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:31 crc kubenswrapper[4878]: I1204 15:40:31.976685 4878 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:34 crc kubenswrapper[4878]: I1204 15:40:34.199598 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:34 crc kubenswrapper[4878]: I1204 15:40:34.200111 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:34 crc kubenswrapper[4878]: I1204 15:40:34.205461 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:34 crc kubenswrapper[4878]: I1204 15:40:34.407557 
4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:40:36 crc kubenswrapper[4878]: I1204 15:40:36.689742 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 04 15:40:36 crc kubenswrapper[4878]: I1204 15:40:36.987304 4878 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:37 crc kubenswrapper[4878]: I1204 15:40:37.009728 4878 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:37 crc kubenswrapper[4878]: I1204 15:40:37.009766 4878 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:37 crc kubenswrapper[4878]: I1204 15:40:37.013718 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:37 crc kubenswrapper[4878]: I1204 15:40:37.201071 4878 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4c21c56f-dc97-4f20-84c0-034bb5b57f6a" Dec 04 15:40:37 crc kubenswrapper[4878]: I1204 15:40:37.240053 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 04 15:40:38 crc kubenswrapper[4878]: I1204 15:40:38.014830 4878 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:38 crc kubenswrapper[4878]: I1204 15:40:38.014891 4878 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3829285d-c049-4d27-b390-5d88c407bd0f" Dec 04 15:40:38 crc kubenswrapper[4878]: I1204 15:40:38.017738 4878 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4c21c56f-dc97-4f20-84c0-034bb5b57f6a" Dec 04 15:40:41 crc kubenswrapper[4878]: I1204 15:40:41.178978 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:40:41 crc kubenswrapper[4878]: I1204 15:40:41.600771 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:40:41 crc kubenswrapper[4878]: I1204 15:40:41.605407 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 04 15:40:46 crc kubenswrapper[4878]: I1204 15:40:46.891794 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 04 15:40:46 crc kubenswrapper[4878]: I1204 15:40:46.946292 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 04 15:40:47 crc kubenswrapper[4878]: I1204 15:40:47.025489 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 04 15:40:47 crc kubenswrapper[4878]: I1204 15:40:47.494565 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 04 15:40:47 crc kubenswrapper[4878]: I1204 15:40:47.532350 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 04 15:40:47 crc kubenswrapper[4878]: I1204 15:40:47.639133 4878 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 04 15:40:47 crc kubenswrapper[4878]: I1204 15:40:47.839992 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 04 15:40:48 crc kubenswrapper[4878]: I1204 15:40:48.048944 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 04 15:40:48 crc kubenswrapper[4878]: I1204 15:40:48.360105 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 04 15:40:48 crc kubenswrapper[4878]: I1204 15:40:48.452459 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 04 15:40:48 crc kubenswrapper[4878]: I1204 15:40:48.525794 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 04 15:40:48 crc kubenswrapper[4878]: I1204 15:40:48.602007 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 04 15:40:48 crc kubenswrapper[4878]: I1204 15:40:48.785275 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 04 15:40:48 crc kubenswrapper[4878]: I1204 15:40:48.954630 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 04 15:40:48 crc kubenswrapper[4878]: I1204 15:40:48.979989 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.038273 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.078154 
4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.082599 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.117906 4878 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.118539 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w8mmv" podStartSLOduration=35.664700031 podStartE2EDuration="38.118520903s" podCreationTimestamp="2025-12-04 15:40:11 +0000 UTC" firstStartedPulling="2025-12-04 15:40:12.785483601 +0000 UTC m=+256.748020557" lastFinishedPulling="2025-12-04 15:40:15.239304473 +0000 UTC m=+259.201841429" observedRunningTime="2025-12-04 15:40:37.028441778 +0000 UTC m=+280.990978734" watchObservedRunningTime="2025-12-04 15:40:49.118520903 +0000 UTC m=+293.081057869" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.123570 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-g9zqn"] Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.123667 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.127347 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.130588 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.150470 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=13.150443395 podStartE2EDuration="13.150443395s" podCreationTimestamp="2025-12-04 15:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:40:49.149930072 +0000 UTC m=+293.112467028" watchObservedRunningTime="2025-12-04 15:40:49.150443395 +0000 UTC m=+293.112980351" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.186853 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" path="/var/lib/kubelet/pods/0c31dded-d5e0-4f14-8de8-c4cf3ec56236/volumes" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.300938 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.430019 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.523720 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.642902 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.725744 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.741942 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 04 15:40:49 crc kubenswrapper[4878]: I1204 15:40:49.840946 4878 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.016341 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.031280 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.165935 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.310178 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.338086 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.385144 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.508219 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.609235 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.635100 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.665945 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 04 15:40:50 crc kubenswrapper[4878]: 
I1204 15:40:50.729419 4878 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.773656 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.792137 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 04 15:40:50 crc kubenswrapper[4878]: I1204 15:40:50.817320 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.110497 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.220071 4878 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.269623 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.338317 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.350599 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.425945 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.447817 4878 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.450237 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.451203 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.527119 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.537931 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.616554 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.669413 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.755620 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.796312 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.826239 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.909979 4878 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 04 15:40:51 crc kubenswrapper[4878]: I1204 15:40:51.974225 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkp9z"] Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.007344 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.028864 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.172784 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.211160 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.212916 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.337141 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.359950 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.370030 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.400977 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.464426 4878 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.466757 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.543472 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.581790 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.727496 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 04 15:40:52 crc kubenswrapper[4878]: I1204 15:40:52.755428 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.014617 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.035108 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.056215 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.135717 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.135997 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 
15:40:53.184118 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.342118 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.435215 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.438021 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.507631 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.716214 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.807322 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.886200 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.886264 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.904295 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 04 15:40:53 crc kubenswrapper[4878]: I1204 15:40:53.948283 4878 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.053441 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.068942 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.095079 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.176932 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.184098 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.240607 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.364465 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.386042 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.423538 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.523314 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.764712 4878 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.782434 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.855984 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 04 15:40:54 crc kubenswrapper[4878]: I1204 15:40:54.904188 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.137938 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.154046 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.155064 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.170600 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.226284 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.261622 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.284297 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 04 15:40:55 
crc kubenswrapper[4878]: I1204 15:40:55.395801 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.414674 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.480389 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.637215 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.699729 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.767249 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.785991 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.806633 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 04 15:40:55 crc kubenswrapper[4878]: I1204 15:40:55.838059 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.176043 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.193733 4878 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.253863 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.271372 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.278535 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.309202 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.370563 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.386798 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.394355 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.434439 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.494469 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.567173 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 04 15:40:56 crc 
kubenswrapper[4878]: I1204 15:40:56.599052 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.642328 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.648303 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.673733 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.746836 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.751570 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.807444 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.810577 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.931262 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.952203 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.976365 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 04 15:40:56 crc 
kubenswrapper[4878]: I1204 15:40:56.979217 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 04 15:40:56 crc kubenswrapper[4878]: I1204 15:40:56.996013 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.039548 4878 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.062260 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.064746 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.265479 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.265849 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.324727 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.327494 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.436024 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.464433 4878 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.561831 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.565443 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.623621 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.674157 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.680129 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.693626 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.715450 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.730664 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.775000 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.825914 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 04 15:40:57 crc 
kubenswrapper[4878]: I1204 15:40:57.870681 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.956521 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.961621 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.966263 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 04 15:40:57 crc kubenswrapper[4878]: I1204 15:40:57.999504 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.006408 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.163382 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.170944 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.198932 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.229402 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.330071 4878 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.450052 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.612580 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.628388 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.662487 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.744433 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.811707 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.868919 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.894288 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.971519 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 04 15:40:58 crc kubenswrapper[4878]: I1204 15:40:58.979317 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 
15:40:59.007530 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.249071 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.310729 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.474170 4878 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.474493 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://43ff6527bb0f033273040738ea7a10e5dc55c08a56aba14a73c2a9c7a02ac554" gracePeriod=5 Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.581378 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.658793 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.679272 4878 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.712410 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.735075 4878 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 04 15:40:59 crc kubenswrapper[4878]: I1204 15:40:59.919113 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.022481 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.035537 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.105248 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.153424 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.158507 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.175544 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.333634 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.524034 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.524819 4878 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.559317 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.592666 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.640758 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.690260 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.830328 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.887280 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.940956 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 04 15:41:00 crc kubenswrapper[4878]: I1204 15:41:00.995510 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.181934 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.239282 4878 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.252105 4878 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.348492 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.375317 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.424612 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.444783 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.481270 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.685575 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.826775 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.845448 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.922948 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 04 15:41:01 crc kubenswrapper[4878]: I1204 15:41:01.977170 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 04 15:41:02 crc kubenswrapper[4878]: I1204 15:41:02.084680 4878 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 04 15:41:02 crc kubenswrapper[4878]: I1204 15:41:02.162585 4878 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 04 15:41:02 crc kubenswrapper[4878]: I1204 15:41:02.222014 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 04 15:41:02 crc kubenswrapper[4878]: I1204 15:41:02.837360 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 04 15:41:02 crc kubenswrapper[4878]: I1204 15:41:02.840032 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 04 15:41:03 crc kubenswrapper[4878]: I1204 15:41:03.030099 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 04 15:41:03 crc kubenswrapper[4878]: I1204 15:41:03.106925 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 04 15:41:03 crc kubenswrapper[4878]: I1204 15:41:03.127416 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 04 15:41:03 crc kubenswrapper[4878]: I1204 15:41:03.135850 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.522221 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b554fccb6-cb2bf"] Dec 04 15:41:04 crc kubenswrapper[4878]: E1204 15:41:04.522477 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.522492 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 04 15:41:04 crc kubenswrapper[4878]: E1204 15:41:04.522500 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" containerName="installer" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.522506 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" containerName="installer" Dec 04 15:41:04 crc kubenswrapper[4878]: E1204 15:41:04.522525 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" containerName="oauth-openshift" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.522533 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" containerName="oauth-openshift" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.522636 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.522649 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="70ac35b6-291f-4c1e-af49-7cb620da5ca1" containerName="installer" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.522660 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c31dded-d5e0-4f14-8de8-c4cf3ec56236" containerName="oauth-openshift" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.523150 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.526297 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.526600 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.526745 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.527561 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.527855 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.535402 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.535832 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.536038 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.536456 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.536486 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 04 15:41:04 crc kubenswrapper[4878]: 
I1204 15:41:04.537040 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.537269 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.548946 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.549700 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.551285 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.557423 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b554fccb6-cb2bf"] Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.638713 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-session\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.638995 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-template-error\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " 
pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639016 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639087 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639133 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639161 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 
15:41:04.639185 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639202 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kk9w\" (UniqueName: \"kubernetes.io/projected/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-kube-api-access-7kk9w\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639220 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-template-login\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639288 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-audit-dir\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639313 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639332 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639347 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.639435 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-audit-policies\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741000 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-audit-policies\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " 
pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741079 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-session\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741119 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-template-error\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741146 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741197 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741224 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741252 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741284 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741316 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kk9w\" (UniqueName: \"kubernetes.io/projected/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-kube-api-access-7kk9w\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741341 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-template-login\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: 
\"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741396 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-audit-dir\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741421 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741447 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741475 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.741909 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-audit-policies\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.742151 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-audit-dir\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.742356 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.743139 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.743276 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: 
I1204 15:41:04.747750 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.747788 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.748030 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.748357 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.748656 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-template-error\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.748705 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.749007 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-user-template-login\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.752405 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-v4-0-config-system-session\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.761937 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kk9w\" (UniqueName: \"kubernetes.io/projected/55c726d6-57d9-4c43-a6f1-bdd3ab3fae00-kube-api-access-7kk9w\") pod \"oauth-openshift-7b554fccb6-cb2bf\" (UID: \"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00\") " pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:04 crc 
kubenswrapper[4878]: I1204 15:41:04.804857 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 04 15:41:04 crc kubenswrapper[4878]: I1204 15:41:04.860142 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:05 crc kubenswrapper[4878]: I1204 15:41:05.063584 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b554fccb6-cb2bf"] Dec 04 15:41:05 crc kubenswrapper[4878]: I1204 15:41:05.175837 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" event={"ID":"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00","Type":"ContainerStarted","Data":"ad78ed54542518b0447866123b4f89d387fdb4a62de31efbcc75ec68dde28ee1"} Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.194334 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" event={"ID":"55c726d6-57d9-4c43-a6f1-bdd3ab3fae00","Type":"ContainerStarted","Data":"8ab8ea4352d14294e982470c8177759ad2f28800e9d3ecfead2270f6c4a4b191"} Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.195000 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.196161 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.196213 4878 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="43ff6527bb0f033273040738ea7a10e5dc55c08a56aba14a73c2a9c7a02ac554" exitCode=137 Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.202750 4878 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.237620 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b554fccb6-cb2bf" podStartSLOduration=65.237594568 podStartE2EDuration="1m5.237594568s" podCreationTimestamp="2025-12-04 15:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:41:08.21815359 +0000 UTC m=+312.180690546" watchObservedRunningTime="2025-12-04 15:41:08.237594568 +0000 UTC m=+312.200131524" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.661419 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.661516 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.810891 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.810970 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.811031 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.811051 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.811147 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.811344 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: 
"manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.811377 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.811404 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.811426 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.819626 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.912582 4878 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.912628 4878 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.912640 4878 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.912649 4878 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:08 crc kubenswrapper[4878]: I1204 15:41:08.912663 4878 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:09 crc kubenswrapper[4878]: I1204 15:41:09.187773 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 04 15:41:09 crc kubenswrapper[4878]: I1204 15:41:09.204372 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 04 15:41:09 crc kubenswrapper[4878]: I1204 15:41:09.204518 4878 scope.go:117] "RemoveContainer" 
containerID="43ff6527bb0f033273040738ea7a10e5dc55c08a56aba14a73c2a9c7a02ac554" Dec 04 15:41:09 crc kubenswrapper[4878]: I1204 15:41:09.204543 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 04 15:41:14 crc kubenswrapper[4878]: I1204 15:41:14.316550 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.010365 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" podUID="26a0fa4d-3430-4477-beae-2b0fa9819756" containerName="registry" containerID="cri-o://6293ed127c808400dfe5a23c6d843a64d79e4e46a728f5aee4eb1a6dc330f4a7" gracePeriod=30 Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.260657 4878 generic.go:334] "Generic (PLEG): container finished" podID="26a0fa4d-3430-4477-beae-2b0fa9819756" containerID="6293ed127c808400dfe5a23c6d843a64d79e4e46a728f5aee4eb1a6dc330f4a7" exitCode=0 Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.260728 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" event={"ID":"26a0fa4d-3430-4477-beae-2b0fa9819756","Type":"ContainerDied","Data":"6293ed127c808400dfe5a23c6d843a64d79e4e46a728f5aee4eb1a6dc330f4a7"} Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.415412 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.534252 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26a0fa4d-3430-4477-beae-2b0fa9819756-installation-pull-secrets\") pod \"26a0fa4d-3430-4477-beae-2b0fa9819756\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.534318 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-certificates\") pod \"26a0fa4d-3430-4477-beae-2b0fa9819756\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.534354 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-trusted-ca\") pod \"26a0fa4d-3430-4477-beae-2b0fa9819756\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.534397 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-tls\") pod \"26a0fa4d-3430-4477-beae-2b0fa9819756\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.534568 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"26a0fa4d-3430-4477-beae-2b0fa9819756\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.534643 4878 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-bound-sa-token\") pod \"26a0fa4d-3430-4477-beae-2b0fa9819756\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.534685 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26a0fa4d-3430-4477-beae-2b0fa9819756-ca-trust-extracted\") pod \"26a0fa4d-3430-4477-beae-2b0fa9819756\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.534727 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvvh9\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-kube-api-access-zvvh9\") pod \"26a0fa4d-3430-4477-beae-2b0fa9819756\" (UID: \"26a0fa4d-3430-4477-beae-2b0fa9819756\") " Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.535386 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "26a0fa4d-3430-4477-beae-2b0fa9819756" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.536087 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "26a0fa4d-3430-4477-beae-2b0fa9819756" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.536406 4878 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.536432 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26a0fa4d-3430-4477-beae-2b0fa9819756-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.541187 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26a0fa4d-3430-4477-beae-2b0fa9819756-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "26a0fa4d-3430-4477-beae-2b0fa9819756" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.542400 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "26a0fa4d-3430-4477-beae-2b0fa9819756" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.542578 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-kube-api-access-zvvh9" (OuterVolumeSpecName: "kube-api-access-zvvh9") pod "26a0fa4d-3430-4477-beae-2b0fa9819756" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756"). InnerVolumeSpecName "kube-api-access-zvvh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.543782 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "26a0fa4d-3430-4477-beae-2b0fa9819756" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.543956 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "26a0fa4d-3430-4477-beae-2b0fa9819756" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.556433 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26a0fa4d-3430-4477-beae-2b0fa9819756-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "26a0fa4d-3430-4477-beae-2b0fa9819756" (UID: "26a0fa4d-3430-4477-beae-2b0fa9819756"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.637493 4878 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.637539 4878 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/26a0fa4d-3430-4477-beae-2b0fa9819756-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.637548 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvvh9\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-kube-api-access-zvvh9\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.637559 4878 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/26a0fa4d-3430-4477-beae-2b0fa9819756-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:17 crc kubenswrapper[4878]: I1204 15:41:17.637571 4878 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/26a0fa4d-3430-4477-beae-2b0fa9819756-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:18 crc kubenswrapper[4878]: I1204 15:41:18.268500 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" event={"ID":"26a0fa4d-3430-4477-beae-2b0fa9819756","Type":"ContainerDied","Data":"ed5b7e753ba684ac5b81067bd4bb09e880a9e27aa64183d675688fb265e38297"} Dec 04 15:41:18 crc kubenswrapper[4878]: I1204 15:41:18.268608 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fkp9z" Dec 04 15:41:18 crc kubenswrapper[4878]: I1204 15:41:18.268645 4878 scope.go:117] "RemoveContainer" containerID="6293ed127c808400dfe5a23c6d843a64d79e4e46a728f5aee4eb1a6dc330f4a7" Dec 04 15:41:18 crc kubenswrapper[4878]: I1204 15:41:18.297577 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkp9z"] Dec 04 15:41:18 crc kubenswrapper[4878]: I1204 15:41:18.301540 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fkp9z"] Dec 04 15:41:19 crc kubenswrapper[4878]: I1204 15:41:19.186815 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26a0fa4d-3430-4477-beae-2b0fa9819756" path="/var/lib/kubelet/pods/26a0fa4d-3430-4477-beae-2b0fa9819756/volumes" Dec 04 15:41:23 crc kubenswrapper[4878]: I1204 15:41:23.247208 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 04 15:41:25 crc kubenswrapper[4878]: I1204 15:41:25.041287 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-br92t"] Dec 04 15:41:25 crc kubenswrapper[4878]: I1204 15:41:25.041574 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" podUID="1c451d04-9071-4d89-a6aa-a26e07523cf6" containerName="controller-manager" containerID="cri-o://38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0" gracePeriod=30 Dec 04 15:41:25 crc kubenswrapper[4878]: I1204 15:41:25.156302 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"] Dec 04 15:41:25 crc kubenswrapper[4878]: I1204 15:41:25.156632 4878 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" podUID="7689249c-9002-4b86-ba80-67bff6b584c4" containerName="route-controller-manager" containerID="cri-o://a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32" gracePeriod=30 Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.298944 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.306889 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.329952 4878 generic.go:334] "Generic (PLEG): container finished" podID="1c451d04-9071-4d89-a6aa-a26e07523cf6" containerID="38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0" exitCode=0 Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.330036 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" event={"ID":"1c451d04-9071-4d89-a6aa-a26e07523cf6","Type":"ContainerDied","Data":"38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0"} Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.330080 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" event={"ID":"1c451d04-9071-4d89-a6aa-a26e07523cf6","Type":"ContainerDied","Data":"ccbea1e0b0175015845d800176616a2b2e9e899f3ad79712fbee468c18c5557d"} Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.330080 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-br92t" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.330119 4878 scope.go:117] "RemoveContainer" containerID="38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.330446 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-4ldws"] Dec 04 15:41:27 crc kubenswrapper[4878]: E1204 15:41:27.330751 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7689249c-9002-4b86-ba80-67bff6b584c4" containerName="route-controller-manager" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.330772 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7689249c-9002-4b86-ba80-67bff6b584c4" containerName="route-controller-manager" Dec 04 15:41:27 crc kubenswrapper[4878]: E1204 15:41:27.330784 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c451d04-9071-4d89-a6aa-a26e07523cf6" containerName="controller-manager" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.330792 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c451d04-9071-4d89-a6aa-a26e07523cf6" containerName="controller-manager" Dec 04 15:41:27 crc kubenswrapper[4878]: E1204 15:41:27.330835 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a0fa4d-3430-4477-beae-2b0fa9819756" containerName="registry" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.330850 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a0fa4d-3430-4477-beae-2b0fa9819756" containerName="registry" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.331080 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7689249c-9002-4b86-ba80-67bff6b584c4" containerName="route-controller-manager" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.331100 4878 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="26a0fa4d-3430-4477-beae-2b0fa9819756" containerName="registry" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.331111 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c451d04-9071-4d89-a6aa-a26e07523cf6" containerName="controller-manager" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.331765 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.332387 4878 generic.go:334] "Generic (PLEG): container finished" podID="7689249c-9002-4b86-ba80-67bff6b584c4" containerID="a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32" exitCode=0 Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.332426 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" event={"ID":"7689249c-9002-4b86-ba80-67bff6b584c4","Type":"ContainerDied","Data":"a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32"} Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.332450 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" event={"ID":"7689249c-9002-4b86-ba80-67bff6b584c4","Type":"ContainerDied","Data":"0f5ce35a81193bac13a22d0b635a1f0909fc9621dcde9105294c3d6962294eb9"} Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.332431 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.343009 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-4ldws"] Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.356513 4878 scope.go:117] "RemoveContainer" containerID="38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0" Dec 04 15:41:27 crc kubenswrapper[4878]: E1204 15:41:27.356861 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0\": container with ID starting with 38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0 not found: ID does not exist" containerID="38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.356935 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0"} err="failed to get container status \"38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0\": rpc error: code = NotFound desc = could not find container \"38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0\": container with ID starting with 38a9db649ca6a69219180f9f8b2b582266c7b3727c710ff12b2d4ae9e3239db0 not found: ID does not exist" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.356979 4878 scope.go:117] "RemoveContainer" containerID="a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374139 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-config\") pod 
\"7689249c-9002-4b86-ba80-67bff6b584c4\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374204 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689249c-9002-4b86-ba80-67bff6b584c4-serving-cert\") pod \"7689249c-9002-4b86-ba80-67bff6b584c4\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374261 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-proxy-ca-bundles\") pod \"1c451d04-9071-4d89-a6aa-a26e07523cf6\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374285 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-config\") pod \"1c451d04-9071-4d89-a6aa-a26e07523cf6\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374329 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ljpl\" (UniqueName: \"kubernetes.io/projected/7689249c-9002-4b86-ba80-67bff6b584c4-kube-api-access-6ljpl\") pod \"7689249c-9002-4b86-ba80-67bff6b584c4\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374352 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-client-ca\") pod \"7689249c-9002-4b86-ba80-67bff6b584c4\" (UID: \"7689249c-9002-4b86-ba80-67bff6b584c4\") " Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374388 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c451d04-9071-4d89-a6aa-a26e07523cf6-serving-cert\") pod \"1c451d04-9071-4d89-a6aa-a26e07523cf6\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374403 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrwxf\" (UniqueName: \"kubernetes.io/projected/1c451d04-9071-4d89-a6aa-a26e07523cf6-kube-api-access-qrwxf\") pod \"1c451d04-9071-4d89-a6aa-a26e07523cf6\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374437 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-client-ca\") pod \"1c451d04-9071-4d89-a6aa-a26e07523cf6\" (UID: \"1c451d04-9071-4d89-a6aa-a26e07523cf6\") " Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374605 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-serving-cert\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374697 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-proxy-ca-bundles\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374727 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-client-ca\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374776 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdxt\" (UniqueName: \"kubernetes.io/projected/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-kube-api-access-wxdxt\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.374823 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-config\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.376124 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-client-ca" (OuterVolumeSpecName: "client-ca") pod "1c451d04-9071-4d89-a6aa-a26e07523cf6" (UID: "1c451d04-9071-4d89-a6aa-a26e07523cf6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.376182 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1c451d04-9071-4d89-a6aa-a26e07523cf6" (UID: "1c451d04-9071-4d89-a6aa-a26e07523cf6"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.376629 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "7689249c-9002-4b86-ba80-67bff6b584c4" (UID: "7689249c-9002-4b86-ba80-67bff6b584c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.376781 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-config" (OuterVolumeSpecName: "config") pod "1c451d04-9071-4d89-a6aa-a26e07523cf6" (UID: "1c451d04-9071-4d89-a6aa-a26e07523cf6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.376826 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-config" (OuterVolumeSpecName: "config") pod "7689249c-9002-4b86-ba80-67bff6b584c4" (UID: "7689249c-9002-4b86-ba80-67bff6b584c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.381061 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c451d04-9071-4d89-a6aa-a26e07523cf6-kube-api-access-qrwxf" (OuterVolumeSpecName: "kube-api-access-qrwxf") pod "1c451d04-9071-4d89-a6aa-a26e07523cf6" (UID: "1c451d04-9071-4d89-a6aa-a26e07523cf6"). InnerVolumeSpecName "kube-api-access-qrwxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.382620 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c451d04-9071-4d89-a6aa-a26e07523cf6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1c451d04-9071-4d89-a6aa-a26e07523cf6" (UID: "1c451d04-9071-4d89-a6aa-a26e07523cf6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.383101 4878 scope.go:117] "RemoveContainer" containerID="a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32" Dec 04 15:41:27 crc kubenswrapper[4878]: E1204 15:41:27.383584 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32\": container with ID starting with a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32 not found: ID does not exist" containerID="a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.383699 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32"} err="failed to get container status \"a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32\": rpc error: code = NotFound desc = could not find container \"a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32\": container with ID starting with a906d3732891099abf4d203f63e129e9db6864eb6786102460ad960b0d08ec32 not found: ID does not exist" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.388637 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7689249c-9002-4b86-ba80-67bff6b584c4-kube-api-access-6ljpl" (OuterVolumeSpecName: "kube-api-access-6ljpl") pod 
"7689249c-9002-4b86-ba80-67bff6b584c4" (UID: "7689249c-9002-4b86-ba80-67bff6b584c4"). InnerVolumeSpecName "kube-api-access-6ljpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.389009 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7689249c-9002-4b86-ba80-67bff6b584c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7689249c-9002-4b86-ba80-67bff6b584c4" (UID: "7689249c-9002-4b86-ba80-67bff6b584c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.475687 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdxt\" (UniqueName: \"kubernetes.io/projected/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-kube-api-access-wxdxt\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.475774 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-config\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.476699 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-serving-cert\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.476775 4878 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-proxy-ca-bundles\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.476804 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-client-ca\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.482719 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.482745 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689249c-9002-4b86-ba80-67bff6b584c4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.482766 4878 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.482778 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.482789 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ljpl\" (UniqueName: \"kubernetes.io/projected/7689249c-9002-4b86-ba80-67bff6b584c4-kube-api-access-6ljpl\") 
on node \"crc\" DevicePath \"\"" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.482804 4878 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689249c-9002-4b86-ba80-67bff6b584c4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.482815 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1c451d04-9071-4d89-a6aa-a26e07523cf6-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.482829 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrwxf\" (UniqueName: \"kubernetes.io/projected/1c451d04-9071-4d89-a6aa-a26e07523cf6-kube-api-access-qrwxf\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.482841 4878 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1c451d04-9071-4d89-a6aa-a26e07523cf6-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.490182 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-serving-cert\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.490443 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-config\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.490609 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-client-ca\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.490766 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-proxy-ca-bundles\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.499784 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdxt\" (UniqueName: \"kubernetes.io/projected/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-kube-api-access-wxdxt\") pod \"controller-manager-868cbf6d4b-4ldws\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.650141 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.665657 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-br92t"] Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.671510 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-br92t"] Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.681236 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"] Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.685349 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zvbhg"] Dec 04 15:41:27 crc kubenswrapper[4878]: I1204 15:41:27.854301 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-4ldws"] Dec 04 15:41:28 crc kubenswrapper[4878]: I1204 15:41:28.340584 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" event={"ID":"3b5a9e4e-3339-4ff9-98b7-579d36cf197f","Type":"ContainerStarted","Data":"9e795c60917a0f9c91cb949d23a6028613ecb5f7f5f37489008ed5bf18db735f"} Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.186846 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c451d04-9071-4d89-a6aa-a26e07523cf6" path="/var/lib/kubelet/pods/1c451d04-9071-4d89-a6aa-a26e07523cf6/volumes" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.187704 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7689249c-9002-4b86-ba80-67bff6b584c4" path="/var/lib/kubelet/pods/7689249c-9002-4b86-ba80-67bff6b584c4/volumes" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.352223 4878 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" event={"ID":"3b5a9e4e-3339-4ff9-98b7-579d36cf197f","Type":"ContainerStarted","Data":"e63f4d1367722cad05308ccc3a4dde520957ba2347fafe837d016b4488072eea"} Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.352560 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.357084 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.374496 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" podStartSLOduration=4.374474344 podStartE2EDuration="4.374474344s" podCreationTimestamp="2025-12-04 15:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:41:29.37153406 +0000 UTC m=+333.334071026" watchObservedRunningTime="2025-12-04 15:41:29.374474344 +0000 UTC m=+333.337011300" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.533939 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd"] Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.534831 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.536478 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.536794 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.536834 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.536807 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.537669 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.538338 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.551572 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd"] Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.610860 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-client-ca\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.610929 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-serving-cert\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.610975 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-config\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.611045 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt7hx\" (UniqueName: \"kubernetes.io/projected/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-kube-api-access-zt7hx\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.712049 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-config\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.712213 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt7hx\" (UniqueName: \"kubernetes.io/projected/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-kube-api-access-zt7hx\") pod 
\"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.712288 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-client-ca\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.712341 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-serving-cert\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.713242 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-client-ca\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.713403 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-config\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.718078 4878 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-serving-cert\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.731237 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt7hx\" (UniqueName: \"kubernetes.io/projected/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-kube-api-access-zt7hx\") pod \"route-controller-manager-6ff59bd9df-k4txd\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:29 crc kubenswrapper[4878]: I1204 15:41:29.907842 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:30 crc kubenswrapper[4878]: I1204 15:41:30.316250 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd"] Dec 04 15:41:30 crc kubenswrapper[4878]: W1204 15:41:30.328471 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaaacc42_3fb1_4fa2_9c5a_a2c333e87edc.slice/crio-1b234a1c54d7ee6b9988e89da91f64c1b0a747dc9eecaca35b1988bf398b640b WatchSource:0}: Error finding container 1b234a1c54d7ee6b9988e89da91f64c1b0a747dc9eecaca35b1988bf398b640b: Status 404 returned error can't find the container with id 1b234a1c54d7ee6b9988e89da91f64c1b0a747dc9eecaca35b1988bf398b640b Dec 04 15:41:30 crc kubenswrapper[4878]: I1204 15:41:30.367055 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" 
event={"ID":"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc","Type":"ContainerStarted","Data":"1b234a1c54d7ee6b9988e89da91f64c1b0a747dc9eecaca35b1988bf398b640b"} Dec 04 15:41:31 crc kubenswrapper[4878]: I1204 15:41:31.374589 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" event={"ID":"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc","Type":"ContainerStarted","Data":"9c5db2deaa3e5a33e8d375af55550c8d29e578d8068ca8534a276b27cc745536"} Dec 04 15:41:31 crc kubenswrapper[4878]: I1204 15:41:31.374984 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:31 crc kubenswrapper[4878]: I1204 15:41:31.391430 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" podStartSLOduration=6.391408525 podStartE2EDuration="6.391408525s" podCreationTimestamp="2025-12-04 15:41:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:41:31.390440071 +0000 UTC m=+335.352977047" watchObservedRunningTime="2025-12-04 15:41:31.391408525 +0000 UTC m=+335.353945481" Dec 04 15:41:31 crc kubenswrapper[4878]: I1204 15:41:31.650375 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:41:45 crc kubenswrapper[4878]: I1204 15:41:45.825052 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 04 15:42:00 crc kubenswrapper[4878]: I1204 15:42:00.840694 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:42:00 crc kubenswrapper[4878]: I1204 15:42:00.841191 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:42:05 crc kubenswrapper[4878]: I1204 15:42:05.068918 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-4ldws"] Dec 04 15:42:05 crc kubenswrapper[4878]: I1204 15:42:05.069589 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" podUID="3b5a9e4e-3339-4ff9-98b7-579d36cf197f" containerName="controller-manager" containerID="cri-o://e63f4d1367722cad05308ccc3a4dde520957ba2347fafe837d016b4488072eea" gracePeriod=30 Dec 04 15:42:05 crc kubenswrapper[4878]: I1204 15:42:05.092970 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd"] Dec 04 15:42:05 crc kubenswrapper[4878]: I1204 15:42:05.093359 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" podUID="eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" containerName="route-controller-manager" containerID="cri-o://9c5db2deaa3e5a33e8d375af55550c8d29e578d8068ca8534a276b27cc745536" gracePeriod=30 Dec 04 15:42:06 crc kubenswrapper[4878]: I1204 15:42:06.587007 4878 generic.go:334] "Generic (PLEG): container finished" podID="eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" containerID="9c5db2deaa3e5a33e8d375af55550c8d29e578d8068ca8534a276b27cc745536" exitCode=0 Dec 04 15:42:06 crc kubenswrapper[4878]: I1204 
15:42:06.589970 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" event={"ID":"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc","Type":"ContainerDied","Data":"9c5db2deaa3e5a33e8d375af55550c8d29e578d8068ca8534a276b27cc745536"} Dec 04 15:42:06 crc kubenswrapper[4878]: I1204 15:42:06.593532 4878 generic.go:334] "Generic (PLEG): container finished" podID="3b5a9e4e-3339-4ff9-98b7-579d36cf197f" containerID="e63f4d1367722cad05308ccc3a4dde520957ba2347fafe837d016b4488072eea" exitCode=0 Dec 04 15:42:06 crc kubenswrapper[4878]: I1204 15:42:06.593630 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" event={"ID":"3b5a9e4e-3339-4ff9-98b7-579d36cf197f","Type":"ContainerDied","Data":"e63f4d1367722cad05308ccc3a4dde520957ba2347fafe837d016b4488072eea"} Dec 04 15:42:06 crc kubenswrapper[4878]: I1204 15:42:06.910516 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:06.920023 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:06.937917 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-678c4b645b-rqwg6"] Dec 04 15:42:08 crc kubenswrapper[4878]: E1204 15:42:06.938225 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" containerName="route-controller-manager" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:06.938242 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" containerName="route-controller-manager" Dec 04 15:42:08 crc kubenswrapper[4878]: E1204 15:42:06.938255 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b5a9e4e-3339-4ff9-98b7-579d36cf197f" containerName="controller-manager" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:06.938262 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b5a9e4e-3339-4ff9-98b7-579d36cf197f" containerName="controller-manager" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:06.938391 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" containerName="route-controller-manager" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:06.938405 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b5a9e4e-3339-4ff9-98b7-579d36cf197f" containerName="controller-manager" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:06.938993 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:06.948537 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678c4b645b-rqwg6"] Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.022732 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-config\") pod \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.022791 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-serving-cert\") pod \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.022886 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-proxy-ca-bundles\") pod \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.022920 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-client-ca\") pod \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.022978 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-serving-cert\") pod \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\" (UID: 
\"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.022995 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-client-ca\") pod \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.023018 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt7hx\" (UniqueName: \"kubernetes.io/projected/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-kube-api-access-zt7hx\") pod \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.023073 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxdxt\" (UniqueName: \"kubernetes.io/projected/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-kube-api-access-wxdxt\") pod \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\" (UID: \"3b5a9e4e-3339-4ff9-98b7-579d36cf197f\") " Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.023101 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-config\") pod \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\" (UID: \"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc\") " Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.023913 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-config" (OuterVolumeSpecName: "config") pod "3b5a9e4e-3339-4ff9-98b7-579d36cf197f" (UID: "3b5a9e4e-3339-4ff9-98b7-579d36cf197f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.024242 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.024626 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "3b5a9e4e-3339-4ff9-98b7-579d36cf197f" (UID: "3b5a9e4e-3339-4ff9-98b7-579d36cf197f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.024985 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-client-ca" (OuterVolumeSpecName: "client-ca") pod "3b5a9e4e-3339-4ff9-98b7-579d36cf197f" (UID: "3b5a9e4e-3339-4ff9-98b7-579d36cf197f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.025178 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-config" (OuterVolumeSpecName: "config") pod "eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" (UID: "eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.025487 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-client-ca" (OuterVolumeSpecName: "client-ca") pod "eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" (UID: "eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.036141 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" (UID: "eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.036190 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-kube-api-access-zt7hx" (OuterVolumeSpecName: "kube-api-access-zt7hx") pod "eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" (UID: "eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc"). InnerVolumeSpecName "kube-api-access-zt7hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.036199 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3b5a9e4e-3339-4ff9-98b7-579d36cf197f" (UID: "3b5a9e4e-3339-4ff9-98b7-579d36cf197f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.036220 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-kube-api-access-wxdxt" (OuterVolumeSpecName: "kube-api-access-wxdxt") pod "3b5a9e4e-3339-4ff9-98b7-579d36cf197f" (UID: "3b5a9e4e-3339-4ff9-98b7-579d36cf197f"). InnerVolumeSpecName "kube-api-access-wxdxt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125017 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e3503-0912-4ae0-b860-c009cd71dbc8-client-ca\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125072 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1e3503-0912-4ae0-b860-c009cd71dbc8-serving-cert\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125098 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnn5v\" (UniqueName: \"kubernetes.io/projected/ed1e3503-0912-4ae0-b860-c009cd71dbc8-kube-api-access-vnn5v\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125148 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed1e3503-0912-4ae0-b860-c009cd71dbc8-proxy-ca-bundles\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125185 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/ed1e3503-0912-4ae0-b860-c009cd71dbc8-config\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125458 4878 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125507 4878 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125518 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125529 4878 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-client-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125540 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt7hx\" (UniqueName: \"kubernetes.io/projected/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-kube-api-access-zt7hx\") on node \"crc\" DevicePath \"\"" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125555 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxdxt\" (UniqueName: \"kubernetes.io/projected/3b5a9e4e-3339-4ff9-98b7-579d36cf197f-kube-api-access-wxdxt\") on node \"crc\" DevicePath \"\"" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125568 4878 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.125580 4878 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.226731 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1e3503-0912-4ae0-b860-c009cd71dbc8-serving-cert\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.227087 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnn5v\" (UniqueName: \"kubernetes.io/projected/ed1e3503-0912-4ae0-b860-c009cd71dbc8-kube-api-access-vnn5v\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.227133 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed1e3503-0912-4ae0-b860-c009cd71dbc8-proxy-ca-bundles\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.227174 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1e3503-0912-4ae0-b860-c009cd71dbc8-config\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " 
pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.227260 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e3503-0912-4ae0-b860-c009cd71dbc8-client-ca\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.228953 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed1e3503-0912-4ae0-b860-c009cd71dbc8-client-ca\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.229243 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed1e3503-0912-4ae0-b860-c009cd71dbc8-config\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.229253 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed1e3503-0912-4ae0-b860-c009cd71dbc8-proxy-ca-bundles\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.232347 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed1e3503-0912-4ae0-b860-c009cd71dbc8-serving-cert\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: 
\"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.245415 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnn5v\" (UniqueName: \"kubernetes.io/projected/ed1e3503-0912-4ae0-b860-c009cd71dbc8-kube-api-access-vnn5v\") pod \"controller-manager-678c4b645b-rqwg6\" (UID: \"ed1e3503-0912-4ae0-b860-c009cd71dbc8\") " pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.264453 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.602258 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.602264 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd" event={"ID":"eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc","Type":"ContainerDied","Data":"1b234a1c54d7ee6b9988e89da91f64c1b0a747dc9eecaca35b1988bf398b640b"} Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.602444 4878 scope.go:117] "RemoveContainer" containerID="9c5db2deaa3e5a33e8d375af55550c8d29e578d8068ca8534a276b27cc745536" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.607621 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" event={"ID":"3b5a9e4e-3339-4ff9-98b7-579d36cf197f","Type":"ContainerDied","Data":"9e795c60917a0f9c91cb949d23a6028613ecb5f7f5f37489008ed5bf18db735f"} Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.607725 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-868cbf6d4b-4ldws" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.629238 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-4ldws"] Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.631191 4878 scope.go:117] "RemoveContainer" containerID="e63f4d1367722cad05308ccc3a4dde520957ba2347fafe837d016b4488072eea" Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.637099 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-868cbf6d4b-4ldws"] Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.645898 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd"] Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:07.651942 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6ff59bd9df-k4txd"] Dec 04 15:42:08 crc kubenswrapper[4878]: I1204 15:42:08.784913 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678c4b645b-rqwg6"] Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.186239 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b5a9e4e-3339-4ff9-98b7-579d36cf197f" path="/var/lib/kubelet/pods/3b5a9e4e-3339-4ff9-98b7-579d36cf197f/volumes" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.187310 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc" path="/var/lib/kubelet/pods/eaaacc42-3fb1-4fa2-9c5a-a2c333e87edc/volumes" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.570735 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt"] Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 
15:42:09.572970 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.575322 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.575503 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.579923 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.580564 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.580691 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.585255 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt"] Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.589276 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.634370 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" event={"ID":"ed1e3503-0912-4ae0-b860-c009cd71dbc8","Type":"ContainerStarted","Data":"eb5a4d1a0c3b1e03be67fd9758adeed2d8c3dbf921229ff357364a192cc67117"} Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.634540 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" event={"ID":"ed1e3503-0912-4ae0-b860-c009cd71dbc8","Type":"ContainerStarted","Data":"42518edc64925003db670c4818dae18829e6c987c8cb6984f2ec8a423fc2f938"} Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.634728 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.639929 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.661333 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-678c4b645b-rqwg6" podStartSLOduration=4.661302923 podStartE2EDuration="4.661302923s" podCreationTimestamp="2025-12-04 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:42:09.660606945 +0000 UTC m=+373.623143901" watchObservedRunningTime="2025-12-04 15:42:09.661302923 +0000 UTC m=+373.623839879" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.760931 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pphw8\" (UniqueName: \"kubernetes.io/projected/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-kube-api-access-pphw8\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.761353 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-client-ca\") pod 
\"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.761407 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-serving-cert\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.761429 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-config\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.863596 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pphw8\" (UniqueName: \"kubernetes.io/projected/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-kube-api-access-pphw8\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.863680 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-client-ca\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: 
I1204 15:42:09.863731 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-serving-cert\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.863764 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-config\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.864908 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-client-ca\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.865082 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-config\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.869885 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-serving-cert\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " 
pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.881208 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pphw8\" (UniqueName: \"kubernetes.io/projected/f44e3e1f-6a4c-4f74-b862-53ecb05ebc13-kube-api-access-pphw8\") pod \"route-controller-manager-6dbd89b5dd-c7wrt\" (UID: \"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13\") " pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:09 crc kubenswrapper[4878]: I1204 15:42:09.899452 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:10 crc kubenswrapper[4878]: I1204 15:42:10.304294 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt"] Dec 04 15:42:10 crc kubenswrapper[4878]: W1204 15:42:10.315255 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf44e3e1f_6a4c_4f74_b862_53ecb05ebc13.slice/crio-e664d26579311651d1d067f3f2d171bf77dd6722d1a582de7895240964ff4e55 WatchSource:0}: Error finding container e664d26579311651d1d067f3f2d171bf77dd6722d1a582de7895240964ff4e55: Status 404 returned error can't find the container with id e664d26579311651d1d067f3f2d171bf77dd6722d1a582de7895240964ff4e55 Dec 04 15:42:10 crc kubenswrapper[4878]: I1204 15:42:10.642471 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" event={"ID":"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13","Type":"ContainerStarted","Data":"e664d26579311651d1d067f3f2d171bf77dd6722d1a582de7895240964ff4e55"} Dec 04 15:42:11 crc kubenswrapper[4878]: I1204 15:42:11.649991 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" event={"ID":"f44e3e1f-6a4c-4f74-b862-53ecb05ebc13","Type":"ContainerStarted","Data":"58e10e402a5586a94591afeef76a386169baf3fa71ba179187acc9dadc48add6"} Dec 04 15:42:11 crc kubenswrapper[4878]: I1204 15:42:11.667995 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" podStartSLOduration=6.667956296 podStartE2EDuration="6.667956296s" podCreationTimestamp="2025-12-04 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:42:11.666144651 +0000 UTC m=+375.628691127" watchObservedRunningTime="2025-12-04 15:42:11.667956296 +0000 UTC m=+375.630493262" Dec 04 15:42:12 crc kubenswrapper[4878]: I1204 15:42:12.655382 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:12 crc kubenswrapper[4878]: I1204 15:42:12.660047 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6dbd89b5dd-c7wrt" Dec 04 15:42:28 crc kubenswrapper[4878]: I1204 15:42:28.321568 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:42:28 crc kubenswrapper[4878]: I1204 15:42:28.322816 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:42:28 crc kubenswrapper[4878]: I1204 15:42:28.323668 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:42:28 crc kubenswrapper[4878]: I1204 15:42:28.334443 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:42:28 crc kubenswrapper[4878]: I1204 15:42:28.581084 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 04 15:42:29 crc kubenswrapper[4878]: I1204 15:42:29.758065 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4010592b0b2bddbe95dc115c8bc39a01118dccb17bb0bdbae869d74ff716897f"} Dec 04 15:42:29 crc kubenswrapper[4878]: I1204 15:42:29.758727 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0a009b08faa1797088cb7ac58c5d00afa6c8de9a9d06680a79aaadd97dfd152a"} Dec 04 15:42:30 crc kubenswrapper[4878]: I1204 15:42:30.840164 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:42:30 crc kubenswrapper[4878]: I1204 15:42:30.840276 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:43:00 crc kubenswrapper[4878]: I1204 15:43:00.840221 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:43:00 crc kubenswrapper[4878]: I1204 15:43:00.840747 4878 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:43:00 crc kubenswrapper[4878]: I1204 15:43:00.840806 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:43:00 crc kubenswrapper[4878]: I1204 15:43:00.841466 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa732be55c172daa9ee825f6a000d49abcb7b598e573f5ef19d017ddfe51de60"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 15:43:00 crc kubenswrapper[4878]: I1204 15:43:00.841535 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://aa732be55c172daa9ee825f6a000d49abcb7b598e573f5ef19d017ddfe51de60" gracePeriod=600 Dec 04 15:43:01 crc kubenswrapper[4878]: I1204 15:43:01.932787 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="aa732be55c172daa9ee825f6a000d49abcb7b598e573f5ef19d017ddfe51de60" exitCode=0 Dec 04 15:43:01 crc kubenswrapper[4878]: I1204 15:43:01.932920 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"aa732be55c172daa9ee825f6a000d49abcb7b598e573f5ef19d017ddfe51de60"} Dec 04 15:43:01 crc kubenswrapper[4878]: I1204 15:43:01.933397 4878 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"873271c54ba316ad167ad25cdfeb8504d7bc2f0b6a256c6664efd047b51ceff5"} Dec 04 15:43:01 crc kubenswrapper[4878]: I1204 15:43:01.933418 4878 scope.go:117] "RemoveContainer" containerID="cb362576aa6868dd25e09bc593a56a1d4aae670f7ed34fb8948a68992008553d" Dec 04 15:44:57 crc kubenswrapper[4878]: I1204 15:44:57.447951 4878 scope.go:117] "RemoveContainer" containerID="e838de45a687a1c64fa153c01c85c9bb9c1185c3459e9176f708886281923aa0" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.169998 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9"] Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.171227 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.173723 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.173851 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.179842 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9"] Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.182671 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2145ebc-f247-4f3a-9843-ad32f23fa61c-secret-volume\") pod \"collect-profiles-29414385-rvfz9\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.183907 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2145ebc-f247-4f3a-9843-ad32f23fa61c-config-volume\") pod \"collect-profiles-29414385-rvfz9\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.184069 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkgg\" (UniqueName: \"kubernetes.io/projected/d2145ebc-f247-4f3a-9843-ad32f23fa61c-kube-api-access-xmkgg\") pod \"collect-profiles-29414385-rvfz9\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.284745 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2145ebc-f247-4f3a-9843-ad32f23fa61c-secret-volume\") pod \"collect-profiles-29414385-rvfz9\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.285303 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2145ebc-f247-4f3a-9843-ad32f23fa61c-config-volume\") pod \"collect-profiles-29414385-rvfz9\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.285420 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmkgg\" (UniqueName: 
\"kubernetes.io/projected/d2145ebc-f247-4f3a-9843-ad32f23fa61c-kube-api-access-xmkgg\") pod \"collect-profiles-29414385-rvfz9\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.286621 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2145ebc-f247-4f3a-9843-ad32f23fa61c-config-volume\") pod \"collect-profiles-29414385-rvfz9\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.293536 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2145ebc-f247-4f3a-9843-ad32f23fa61c-secret-volume\") pod \"collect-profiles-29414385-rvfz9\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.306926 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmkgg\" (UniqueName: \"kubernetes.io/projected/d2145ebc-f247-4f3a-9843-ad32f23fa61c-kube-api-access-xmkgg\") pod \"collect-profiles-29414385-rvfz9\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.487112 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.680339 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9"] Dec 04 15:45:00 crc kubenswrapper[4878]: I1204 15:45:00.795684 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" event={"ID":"d2145ebc-f247-4f3a-9843-ad32f23fa61c","Type":"ContainerStarted","Data":"eed3943c23fbea1e8c0cce0a0942a354199bbcbe6b55f4116caf5a2c35254f9d"} Dec 04 15:45:01 crc kubenswrapper[4878]: I1204 15:45:01.801849 4878 generic.go:334] "Generic (PLEG): container finished" podID="d2145ebc-f247-4f3a-9843-ad32f23fa61c" containerID="929051d28420808246af3dc8bd1173fd8ad9da7a4f7b6c6d743021cfe0c0c025" exitCode=0 Dec 04 15:45:01 crc kubenswrapper[4878]: I1204 15:45:01.801932 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" event={"ID":"d2145ebc-f247-4f3a-9843-ad32f23fa61c","Type":"ContainerDied","Data":"929051d28420808246af3dc8bd1173fd8ad9da7a4f7b6c6d743021cfe0c0c025"} Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.010494 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.122937 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2145ebc-f247-4f3a-9843-ad32f23fa61c-config-volume\") pod \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.123041 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2145ebc-f247-4f3a-9843-ad32f23fa61c-secret-volume\") pod \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.123103 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmkgg\" (UniqueName: \"kubernetes.io/projected/d2145ebc-f247-4f3a-9843-ad32f23fa61c-kube-api-access-xmkgg\") pod \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\" (UID: \"d2145ebc-f247-4f3a-9843-ad32f23fa61c\") " Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.123474 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2145ebc-f247-4f3a-9843-ad32f23fa61c-config-volume" (OuterVolumeSpecName: "config-volume") pod "d2145ebc-f247-4f3a-9843-ad32f23fa61c" (UID: "d2145ebc-f247-4f3a-9843-ad32f23fa61c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.128754 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2145ebc-f247-4f3a-9843-ad32f23fa61c-kube-api-access-xmkgg" (OuterVolumeSpecName: "kube-api-access-xmkgg") pod "d2145ebc-f247-4f3a-9843-ad32f23fa61c" (UID: "d2145ebc-f247-4f3a-9843-ad32f23fa61c"). 
InnerVolumeSpecName "kube-api-access-xmkgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.128771 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2145ebc-f247-4f3a-9843-ad32f23fa61c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d2145ebc-f247-4f3a-9843-ad32f23fa61c" (UID: "d2145ebc-f247-4f3a-9843-ad32f23fa61c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.224217 4878 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d2145ebc-f247-4f3a-9843-ad32f23fa61c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.224258 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmkgg\" (UniqueName: \"kubernetes.io/projected/d2145ebc-f247-4f3a-9843-ad32f23fa61c-kube-api-access-xmkgg\") on node \"crc\" DevicePath \"\"" Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.224267 4878 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2145ebc-f247-4f3a-9843-ad32f23fa61c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.813565 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" event={"ID":"d2145ebc-f247-4f3a-9843-ad32f23fa61c","Type":"ContainerDied","Data":"eed3943c23fbea1e8c0cce0a0942a354199bbcbe6b55f4116caf5a2c35254f9d"} Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.813831 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eed3943c23fbea1e8c0cce0a0942a354199bbcbe6b55f4116caf5a2c35254f9d" Dec 04 15:45:03 crc kubenswrapper[4878]: I1204 15:45:03.813628 4878 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9" Dec 04 15:45:30 crc kubenswrapper[4878]: I1204 15:45:30.840345 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:45:30 crc kubenswrapper[4878]: I1204 15:45:30.840951 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:46:00 crc kubenswrapper[4878]: I1204 15:46:00.840854 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:46:00 crc kubenswrapper[4878]: I1204 15:46:00.841679 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:46:30 crc kubenswrapper[4878]: I1204 15:46:30.840770 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:46:30 crc 
kubenswrapper[4878]: I1204 15:46:30.841318 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:46:30 crc kubenswrapper[4878]: I1204 15:46:30.841377 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:46:30 crc kubenswrapper[4878]: I1204 15:46:30.842034 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"873271c54ba316ad167ad25cdfeb8504d7bc2f0b6a256c6664efd047b51ceff5"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 15:46:30 crc kubenswrapper[4878]: I1204 15:46:30.842110 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://873271c54ba316ad167ad25cdfeb8504d7bc2f0b6a256c6664efd047b51ceff5" gracePeriod=600 Dec 04 15:46:31 crc kubenswrapper[4878]: I1204 15:46:31.328099 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="873271c54ba316ad167ad25cdfeb8504d7bc2f0b6a256c6664efd047b51ceff5" exitCode=0 Dec 04 15:46:31 crc kubenswrapper[4878]: I1204 15:46:31.328169 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"873271c54ba316ad167ad25cdfeb8504d7bc2f0b6a256c6664efd047b51ceff5"} 
Dec 04 15:46:31 crc kubenswrapper[4878]: I1204 15:46:31.328428 4878 scope.go:117] "RemoveContainer" containerID="aa732be55c172daa9ee825f6a000d49abcb7b598e573f5ef19d017ddfe51de60" Dec 04 15:46:32 crc kubenswrapper[4878]: I1204 15:46:32.338824 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"24186795437d00a19bfab5413d9cd89c8f17b821e40eb4736dd5bfc921c524ca"} Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.947862 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zjf7m"] Dec 04 15:47:50 crc kubenswrapper[4878]: E1204 15:47:50.948970 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2145ebc-f247-4f3a-9843-ad32f23fa61c" containerName="collect-profiles" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.948996 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2145ebc-f247-4f3a-9843-ad32f23fa61c" containerName="collect-profiles" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.949127 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2145ebc-f247-4f3a-9843-ad32f23fa61c" containerName="collect-profiles" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.949659 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-zjf7m" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.953404 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jq2p4"] Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.953888 4878 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-crr22" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.953959 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.953988 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.954131 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-jq2p4" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.972558 4878 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2gr49" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.975459 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jq2p4"] Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.983767 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-6w6cz"] Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.984741 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.987110 4878 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-c4b8k" Dec 04 15:47:50 crc kubenswrapper[4878]: I1204 15:47:50.988984 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zjf7m"] Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.000037 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-6w6cz"] Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.106146 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvhzp\" (UniqueName: \"kubernetes.io/projected/f338f7a0-f59d-4f56-8f51-e9aade039feb-kube-api-access-zvhzp\") pod \"cert-manager-webhook-5655c58dd6-6w6cz\" (UID: \"f338f7a0-f59d-4f56-8f51-e9aade039feb\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.106202 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79qsh\" (UniqueName: \"kubernetes.io/projected/e4d3c25f-014d-4d4e-aff9-291289e798f8-kube-api-access-79qsh\") pod \"cert-manager-cainjector-7f985d654d-zjf7m\" (UID: \"e4d3c25f-014d-4d4e-aff9-291289e798f8\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zjf7m" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.106225 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbsp8\" (UniqueName: \"kubernetes.io/projected/64e1efea-99bb-4630-82dc-b90418609577-kube-api-access-sbsp8\") pod \"cert-manager-5b446d88c5-jq2p4\" (UID: \"64e1efea-99bb-4630-82dc-b90418609577\") " pod="cert-manager/cert-manager-5b446d88c5-jq2p4" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 
15:47:51.207743 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvhzp\" (UniqueName: \"kubernetes.io/projected/f338f7a0-f59d-4f56-8f51-e9aade039feb-kube-api-access-zvhzp\") pod \"cert-manager-webhook-5655c58dd6-6w6cz\" (UID: \"f338f7a0-f59d-4f56-8f51-e9aade039feb\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.207788 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79qsh\" (UniqueName: \"kubernetes.io/projected/e4d3c25f-014d-4d4e-aff9-291289e798f8-kube-api-access-79qsh\") pod \"cert-manager-cainjector-7f985d654d-zjf7m\" (UID: \"e4d3c25f-014d-4d4e-aff9-291289e798f8\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zjf7m" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.207814 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbsp8\" (UniqueName: \"kubernetes.io/projected/64e1efea-99bb-4630-82dc-b90418609577-kube-api-access-sbsp8\") pod \"cert-manager-5b446d88c5-jq2p4\" (UID: \"64e1efea-99bb-4630-82dc-b90418609577\") " pod="cert-manager/cert-manager-5b446d88c5-jq2p4" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.228628 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79qsh\" (UniqueName: \"kubernetes.io/projected/e4d3c25f-014d-4d4e-aff9-291289e798f8-kube-api-access-79qsh\") pod \"cert-manager-cainjector-7f985d654d-zjf7m\" (UID: \"e4d3c25f-014d-4d4e-aff9-291289e798f8\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-zjf7m" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.228902 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbsp8\" (UniqueName: \"kubernetes.io/projected/64e1efea-99bb-4630-82dc-b90418609577-kube-api-access-sbsp8\") pod \"cert-manager-5b446d88c5-jq2p4\" (UID: \"64e1efea-99bb-4630-82dc-b90418609577\") " 
pod="cert-manager/cert-manager-5b446d88c5-jq2p4" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.228945 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvhzp\" (UniqueName: \"kubernetes.io/projected/f338f7a0-f59d-4f56-8f51-e9aade039feb-kube-api-access-zvhzp\") pod \"cert-manager-webhook-5655c58dd6-6w6cz\" (UID: \"f338f7a0-f59d-4f56-8f51-e9aade039feb\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.267136 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-zjf7m" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.272808 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-jq2p4" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.296971 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.606916 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-6w6cz"] Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.613100 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.735735 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-zjf7m"] Dec 04 15:47:51 crc kubenswrapper[4878]: W1204 15:47:51.747751 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64e1efea_99bb_4630_82dc_b90418609577.slice/crio-f96e3f398b311df5cf0b26873df882d679f5315f9d767b7f6604bc05527c274a WatchSource:0}: Error finding container f96e3f398b311df5cf0b26873df882d679f5315f9d767b7f6604bc05527c274a: Status 404 
returned error can't find the container with id f96e3f398b311df5cf0b26873df882d679f5315f9d767b7f6604bc05527c274a Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.748079 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-jq2p4"] Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.771953 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" event={"ID":"f338f7a0-f59d-4f56-8f51-e9aade039feb","Type":"ContainerStarted","Data":"41d9e13d65b2c354256203b4f6383d6b69396730bcc32dce710aff274e05b975"} Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.773059 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jq2p4" event={"ID":"64e1efea-99bb-4630-82dc-b90418609577","Type":"ContainerStarted","Data":"f96e3f398b311df5cf0b26873df882d679f5315f9d767b7f6604bc05527c274a"} Dec 04 15:47:51 crc kubenswrapper[4878]: I1204 15:47:51.774042 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-zjf7m" event={"ID":"e4d3c25f-014d-4d4e-aff9-291289e798f8","Type":"ContainerStarted","Data":"620571df672aec695d49ecebe8f357bf98f67184f37db7afdd67c69ce97e0d42"} Dec 04 15:47:56 crc kubenswrapper[4878]: I1204 15:47:56.802574 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-zjf7m" event={"ID":"e4d3c25f-014d-4d4e-aff9-291289e798f8","Type":"ContainerStarted","Data":"34625de03a4fca781e9f89607b1ea02cae34bbedfb9513a433bd03b6af3d7e63"} Dec 04 15:47:56 crc kubenswrapper[4878]: I1204 15:47:56.804685 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" event={"ID":"f338f7a0-f59d-4f56-8f51-e9aade039feb","Type":"ContainerStarted","Data":"5c591a083132697579ac9f8728f5f416c5ae69b9c7c6ec228f8f50dc5399efa5"} Dec 04 15:47:56 crc kubenswrapper[4878]: I1204 15:47:56.805628 4878 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" Dec 04 15:47:56 crc kubenswrapper[4878]: I1204 15:47:56.808253 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-jq2p4" event={"ID":"64e1efea-99bb-4630-82dc-b90418609577","Type":"ContainerStarted","Data":"b505a269b9c3eacc2bd63bce37a1e3d46e8d353062aa6dedc14114774998c856"} Dec 04 15:47:56 crc kubenswrapper[4878]: I1204 15:47:56.842895 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" podStartSLOduration=2.057828342 podStartE2EDuration="6.842861778s" podCreationTimestamp="2025-12-04 15:47:50 +0000 UTC" firstStartedPulling="2025-12-04 15:47:51.612852495 +0000 UTC m=+715.575389451" lastFinishedPulling="2025-12-04 15:47:56.397885931 +0000 UTC m=+720.360422887" observedRunningTime="2025-12-04 15:47:56.84100299 +0000 UTC m=+720.803539946" watchObservedRunningTime="2025-12-04 15:47:56.842861778 +0000 UTC m=+720.805398734" Dec 04 15:47:56 crc kubenswrapper[4878]: I1204 15:47:56.844438 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-zjf7m" podStartSLOduration=1.971770604 podStartE2EDuration="6.844428249s" podCreationTimestamp="2025-12-04 15:47:50 +0000 UTC" firstStartedPulling="2025-12-04 15:47:51.73919559 +0000 UTC m=+715.701732546" lastFinishedPulling="2025-12-04 15:47:56.611853235 +0000 UTC m=+720.574390191" observedRunningTime="2025-12-04 15:47:56.82470524 +0000 UTC m=+720.787242186" watchObservedRunningTime="2025-12-04 15:47:56.844428249 +0000 UTC m=+720.806965225" Dec 04 15:47:56 crc kubenswrapper[4878]: I1204 15:47:56.872750 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-jq2p4" podStartSLOduration=2.226158433 podStartE2EDuration="6.87273079s" podCreationTimestamp="2025-12-04 15:47:50 +0000 UTC" 
firstStartedPulling="2025-12-04 15:47:51.750237166 +0000 UTC m=+715.712774132" lastFinishedPulling="2025-12-04 15:47:56.396809533 +0000 UTC m=+720.359346489" observedRunningTime="2025-12-04 15:47:56.871258852 +0000 UTC m=+720.833795808" watchObservedRunningTime="2025-12-04 15:47:56.87273079 +0000 UTC m=+720.835267746" Dec 04 15:48:01 crc kubenswrapper[4878]: I1204 15:48:01.299931 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-6w6cz" Dec 04 15:48:08 crc kubenswrapper[4878]: I1204 15:48:08.949839 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qzptn"] Dec 04 15:48:08 crc kubenswrapper[4878]: I1204 15:48:08.950359 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovn-controller" containerID="cri-o://4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31" gracePeriod=30 Dec 04 15:48:08 crc kubenswrapper[4878]: I1204 15:48:08.950538 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e" gracePeriod=30 Dec 04 15:48:08 crc kubenswrapper[4878]: I1204 15:48:08.950484 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="northd" containerID="cri-o://c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616" gracePeriod=30 Dec 04 15:48:08 crc kubenswrapper[4878]: I1204 15:48:08.950605 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" 
podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kube-rbac-proxy-node" containerID="cri-o://4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd" gracePeriod=30 Dec 04 15:48:08 crc kubenswrapper[4878]: I1204 15:48:08.950543 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="nbdb" containerID="cri-o://5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec" gracePeriod=30 Dec 04 15:48:08 crc kubenswrapper[4878]: I1204 15:48:08.950653 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovn-acl-logging" containerID="cri-o://b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f" gracePeriod=30 Dec 04 15:48:08 crc kubenswrapper[4878]: I1204 15:48:08.951572 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="sbdb" containerID="cri-o://f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db" gracePeriod=30 Dec 04 15:48:08 crc kubenswrapper[4878]: I1204 15:48:08.985980 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" containerID="cri-o://2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e" gracePeriod=30 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.631064 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/3.log" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.633350 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovn-acl-logging/0.log" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.633836 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovn-controller/0.log" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.634399 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.693953 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zms4n"] Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694205 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kube-rbac-proxy-ovn-metrics" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694219 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kube-rbac-proxy-ovn-metrics" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694232 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="sbdb" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694242 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="sbdb" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694253 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kube-rbac-proxy-node" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694262 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kube-rbac-proxy-node" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694277 4878 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694284 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694292 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovn-acl-logging" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694299 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovn-acl-logging" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694314 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovn-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694322 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovn-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694334 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="northd" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694341 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="northd" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694353 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694361 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694370 4878 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="nbdb" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694377 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="nbdb" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694387 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694486 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694500 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kubecfg-setup" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694508 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kubecfg-setup" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694618 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kube-rbac-proxy-node" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694633 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="sbdb" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694643 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694651 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694660 4878 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="northd" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694672 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="nbdb" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694682 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="kube-rbac-proxy-ovn-metrics" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694689 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovn-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694699 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovn-acl-logging" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694708 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694830 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694892 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: E1204 15:48:09.694910 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.694919 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.695030 4878 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.695239 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerName="ovnkube-controller" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.696763 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770105 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-node-log\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770155 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-slash\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770195 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-systemd\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770214 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-netns\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770243 4878 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-bin\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770249 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-node-log" (OuterVolumeSpecName: "node-log") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770264 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-openvswitch\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770286 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovn-node-metrics-cert\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770297 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770288 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-slash" (OuterVolumeSpecName: "host-slash") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770312 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770394 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-kubelet\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770341 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770436 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-env-overrides\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770368 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770370 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770455 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-ovn\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770473 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770509 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770532 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-systemd-units\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770560 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-netd\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770603 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770619 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-ovn-kubernetes\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770634 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770653 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-config\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770675 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-var-lib-openvswitch\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770680 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770706 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-script-lib\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770724 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770750 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nxwl\" (UniqueName: \"kubernetes.io/projected/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-kube-api-access-4nxwl\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770773 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-log-socket\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770824 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-etc-openvswitch\") pod \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\" (UID: \"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e\") " Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 
15:48:09.770861 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-log-socket" (OuterVolumeSpecName: "log-socket") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.770979 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771044 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-env-overrides\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771091 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-run-openvswitch\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771134 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-cni-bin\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771187 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-etc-openvswitch\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771216 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-var-lib-openvswitch\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771258 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771280 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-ovnkube-script-lib\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771280 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod 
"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771307 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-run-netns\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771343 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84zc9\" (UniqueName: \"kubernetes.io/projected/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-kube-api-access-84zc9\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771294 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771394 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-log-socket\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771457 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-systemd-units\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771489 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-run-systemd\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771564 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-kubelet\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771627 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-slash\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771646 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-run-ovn\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771675 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-ovn-node-metrics-cert\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771677 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771764 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-cni-netd\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771787 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-node-log\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771934 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-run-ovn-kubernetes\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.771983 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-ovnkube-config\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772081 4878 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 
15:48:09.772095 4878 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-node-log\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772105 4878 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-slash\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772113 4878 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772121 4878 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772129 4878 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772140 4878 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772149 4878 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772158 4878 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772166 4878 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772174 4878 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772181 4878 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772191 4878 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772200 4878 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772208 4878 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772216 4878 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovnkube-script-lib\") on node 
\"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.772225 4878 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-log-socket\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.776054 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-kube-api-access-4nxwl" (OuterVolumeSpecName: "kube-api-access-4nxwl") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "kube-api-access-4nxwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.776275 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.783816 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" (UID: "5b6e8498-be44-4b9c-9dd3-dc08f9515f2e"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874000 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84zc9\" (UniqueName: \"kubernetes.io/projected/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-kube-api-access-84zc9\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874113 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-log-socket\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874151 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-systemd-units\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874177 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-run-systemd\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874202 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-kubelet\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc 
kubenswrapper[4878]: I1204 15:48:09.874244 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-slash\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874263 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-run-ovn\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874288 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-ovn-node-metrics-cert\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874296 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-log-socket\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874315 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-cni-netd\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874337 4878 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-node-log\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874330 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-systemd-units\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874393 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-kubelet\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874398 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-run-ovn-kubernetes\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874374 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-run-systemd\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874367 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-run-ovn-kubernetes\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874352 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-slash\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874442 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-run-ovn\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874553 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-cni-netd\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874563 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-node-log\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874591 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-ovnkube-config\") pod \"ovnkube-node-zms4n\" (UID: 
\"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874617 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-env-overrides\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874638 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-run-openvswitch\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874656 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-cni-bin\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874672 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-etc-openvswitch\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874687 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-var-lib-openvswitch\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874709 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874724 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-ovnkube-script-lib\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874741 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-run-netns\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874780 4878 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874792 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nxwl\" (UniqueName: \"kubernetes.io/projected/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-kube-api-access-4nxwl\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874801 4878 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.874827 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-run-netns\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.875128 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-etc-openvswitch\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.875180 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.875147 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-run-openvswitch\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.875221 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-var-lib-openvswitch\") pod \"ovnkube-node-zms4n\" (UID: 
\"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.875232 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-host-cni-bin\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.875714 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-ovnkube-config\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.875768 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-env-overrides\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.876312 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-ovnkube-script-lib\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.877941 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-ovn-node-metrics-cert\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 
15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.891642 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovnkube-controller/3.log" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.893095 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84zc9\" (UniqueName: \"kubernetes.io/projected/f34265c7-5f6c-4bdc-a188-777fd0c05d6b-kube-api-access-84zc9\") pod \"ovnkube-node-zms4n\" (UID: \"f34265c7-5f6c-4bdc-a188-777fd0c05d6b\") " pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.894729 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovn-acl-logging/0.log" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.895543 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qzptn_5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/ovn-controller/0.log" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896059 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e" exitCode=0 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896109 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db" exitCode=0 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896121 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec" exitCode=0 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896130 4878 generic.go:334] "Generic (PLEG): container finished" 
podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616" exitCode=0 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896139 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e" exitCode=0 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896147 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd" exitCode=0 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896156 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f" exitCode=143 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896156 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896168 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896224 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896242 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896256 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896270 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896284 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} Dec 04 15:48:09 crc 
kubenswrapper[4878]: I1204 15:48:09.896299 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896318 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896326 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896333 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896334 4878 scope.go:117] "RemoveContainer" containerID="2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896342 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896164 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" containerID="4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31" exitCode=143 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896690 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896717 4878 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896724 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896730 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896753 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896779 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896787 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896793 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896800 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} Dec 04 
15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896805 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896810 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896818 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896823 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896828 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896833 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896841 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896849 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896856 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896862 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896890 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896896 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896902 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896907 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896913 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896919 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896926 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896935 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qzptn" event={"ID":"5b6e8498-be44-4b9c-9dd3-dc08f9515f2e","Type":"ContainerDied","Data":"92827045b9819297e4d561b980b595eeec4c764e7c27bcf1c8abcfe798d4544a"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896945 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896957 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896965 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896973 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896982 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.896990 4878 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.897001 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.897008 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.897015 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.897022 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.899336 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/2.log" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.899989 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/1.log" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.900033 4878 generic.go:334] "Generic (PLEG): container finished" podID="c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757" containerID="b7abb0abe7f56ff1bdcd8c17582bd214dee727f1f4d519f3197514b6c583a0ad" exitCode=2 Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.900074 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-9p8p7" event={"ID":"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757","Type":"ContainerDied","Data":"b7abb0abe7f56ff1bdcd8c17582bd214dee727f1f4d519f3197514b6c583a0ad"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.900101 4878 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0"} Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.902510 4878 scope.go:117] "RemoveContainer" containerID="b7abb0abe7f56ff1bdcd8c17582bd214dee727f1f4d519f3197514b6c583a0ad" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.919982 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.966407 4878 scope.go:117] "RemoveContainer" containerID="f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db" Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.972270 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qzptn"] Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.977471 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qzptn"] Dec 04 15:48:09 crc kubenswrapper[4878]: I1204 15:48:09.995257 4878 scope.go:117] "RemoveContainer" containerID="5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.016242 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.017283 4878 scope.go:117] "RemoveContainer" containerID="c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.031520 4878 scope.go:117] "RemoveContainer" containerID="288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e" Dec 04 15:48:10 crc kubenswrapper[4878]: W1204 15:48:10.041156 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf34265c7_5f6c_4bdc_a188_777fd0c05d6b.slice/crio-31eb5abb746e8dbfb473361265089f441f728f3db71e28c98e1281703d9f8ddd WatchSource:0}: Error finding container 31eb5abb746e8dbfb473361265089f441f728f3db71e28c98e1281703d9f8ddd: Status 404 returned error can't find the container with id 31eb5abb746e8dbfb473361265089f441f728f3db71e28c98e1281703d9f8ddd Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.049636 4878 scope.go:117] "RemoveContainer" containerID="4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.067668 4878 scope.go:117] "RemoveContainer" containerID="b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.116208 4878 scope.go:117] "RemoveContainer" containerID="4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.138804 4878 scope.go:117] "RemoveContainer" containerID="14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.151601 4878 scope.go:117] "RemoveContainer" containerID="2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.152272 4878 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": container with ID starting with 2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e not found: ID does not exist" containerID="2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.152305 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} err="failed to get container status \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": rpc error: code = NotFound desc = could not find container \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": container with ID starting with 2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.152329 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.152654 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\": container with ID starting with f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135 not found: ID does not exist" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.152705 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} err="failed to get container status \"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\": rpc error: code = NotFound desc = could not find container 
\"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\": container with ID starting with f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.152784 4878 scope.go:117] "RemoveContainer" containerID="f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.153139 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\": container with ID starting with f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db not found: ID does not exist" containerID="f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.153164 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} err="failed to get container status \"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\": rpc error: code = NotFound desc = could not find container \"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\": container with ID starting with f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.153179 4878 scope.go:117] "RemoveContainer" containerID="5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.153484 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\": container with ID starting with 5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec not found: ID does not exist" 
containerID="5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.153508 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} err="failed to get container status \"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\": rpc error: code = NotFound desc = could not find container \"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\": container with ID starting with 5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.153521 4878 scope.go:117] "RemoveContainer" containerID="c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.153853 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\": container with ID starting with c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616 not found: ID does not exist" containerID="c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.153888 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} err="failed to get container status \"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\": rpc error: code = NotFound desc = could not find container \"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\": container with ID starting with c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.153903 4878 scope.go:117] 
"RemoveContainer" containerID="288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.154158 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\": container with ID starting with 288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e not found: ID does not exist" containerID="288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.154189 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} err="failed to get container status \"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\": rpc error: code = NotFound desc = could not find container \"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\": container with ID starting with 288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.154212 4878 scope.go:117] "RemoveContainer" containerID="4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.154510 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\": container with ID starting with 4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd not found: ID does not exist" containerID="4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.154533 4878 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} err="failed to get container status \"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\": rpc error: code = NotFound desc = could not find container \"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\": container with ID starting with 4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.154547 4878 scope.go:117] "RemoveContainer" containerID="b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.154802 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\": container with ID starting with b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f not found: ID does not exist" containerID="b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.154820 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} err="failed to get container status \"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\": rpc error: code = NotFound desc = could not find container \"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\": container with ID starting with b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.154839 4878 scope.go:117] "RemoveContainer" containerID="4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.155214 4878 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\": container with ID starting with 4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31 not found: ID does not exist" containerID="4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.155243 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} err="failed to get container status \"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\": rpc error: code = NotFound desc = could not find container \"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\": container with ID starting with 4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.155261 4878 scope.go:117] "RemoveContainer" containerID="14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305" Dec 04 15:48:10 crc kubenswrapper[4878]: E1204 15:48:10.155488 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\": container with ID starting with 14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305 not found: ID does not exist" containerID="14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.155508 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305"} err="failed to get container status \"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\": rpc error: code = NotFound desc = could not find container 
\"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\": container with ID starting with 14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.155522 4878 scope.go:117] "RemoveContainer" containerID="2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.155801 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} err="failed to get container status \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": rpc error: code = NotFound desc = could not find container \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": container with ID starting with 2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.155847 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.156108 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} err="failed to get container status \"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\": rpc error: code = NotFound desc = could not find container \"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\": container with ID starting with f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.156127 4878 scope.go:117] "RemoveContainer" containerID="f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.156397 4878 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} err="failed to get container status \"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\": rpc error: code = NotFound desc = could not find container \"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\": container with ID starting with f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.156445 4878 scope.go:117] "RemoveContainer" containerID="5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.157445 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} err="failed to get container status \"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\": rpc error: code = NotFound desc = could not find container \"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\": container with ID starting with 5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.157464 4878 scope.go:117] "RemoveContainer" containerID="c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.157758 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} err="failed to get container status \"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\": rpc error: code = NotFound desc = could not find container \"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\": container with ID starting with 
c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.157784 4878 scope.go:117] "RemoveContainer" containerID="288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.160207 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} err="failed to get container status \"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\": rpc error: code = NotFound desc = could not find container \"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\": container with ID starting with 288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.160227 4878 scope.go:117] "RemoveContainer" containerID="4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.160638 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} err="failed to get container status \"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\": rpc error: code = NotFound desc = could not find container \"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\": container with ID starting with 4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.160664 4878 scope.go:117] "RemoveContainer" containerID="b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.160985 4878 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} err="failed to get container status \"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\": rpc error: code = NotFound desc = could not find container \"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\": container with ID starting with b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.161034 4878 scope.go:117] "RemoveContainer" containerID="4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.161685 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} err="failed to get container status \"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\": rpc error: code = NotFound desc = could not find container \"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\": container with ID starting with 4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.161715 4878 scope.go:117] "RemoveContainer" containerID="14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.161958 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305"} err="failed to get container status \"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\": rpc error: code = NotFound desc = could not find container \"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\": container with ID starting with 14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305 not found: ID does not 
exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.161981 4878 scope.go:117] "RemoveContainer" containerID="2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.162277 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} err="failed to get container status \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": rpc error: code = NotFound desc = could not find container \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": container with ID starting with 2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.162301 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.162503 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} err="failed to get container status \"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\": rpc error: code = NotFound desc = could not find container \"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\": container with ID starting with f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.162525 4878 scope.go:117] "RemoveContainer" containerID="f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.162759 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} err="failed to get container status 
\"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\": rpc error: code = NotFound desc = could not find container \"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\": container with ID starting with f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.162784 4878 scope.go:117] "RemoveContainer" containerID="5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.163013 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} err="failed to get container status \"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\": rpc error: code = NotFound desc = could not find container \"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\": container with ID starting with 5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.163032 4878 scope.go:117] "RemoveContainer" containerID="c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.163278 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} err="failed to get container status \"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\": rpc error: code = NotFound desc = could not find container \"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\": container with ID starting with c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.163301 4878 scope.go:117] "RemoveContainer" 
containerID="288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.163561 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} err="failed to get container status \"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\": rpc error: code = NotFound desc = could not find container \"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\": container with ID starting with 288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.163581 4878 scope.go:117] "RemoveContainer" containerID="4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.163794 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} err="failed to get container status \"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\": rpc error: code = NotFound desc = could not find container \"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\": container with ID starting with 4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.163815 4878 scope.go:117] "RemoveContainer" containerID="b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.164079 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} err="failed to get container status \"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\": rpc error: code = NotFound desc = could 
not find container \"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\": container with ID starting with b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.164099 4878 scope.go:117] "RemoveContainer" containerID="4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.164361 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} err="failed to get container status \"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\": rpc error: code = NotFound desc = could not find container \"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\": container with ID starting with 4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.164381 4878 scope.go:117] "RemoveContainer" containerID="14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.164581 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305"} err="failed to get container status \"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\": rpc error: code = NotFound desc = could not find container \"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\": container with ID starting with 14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.164617 4878 scope.go:117] "RemoveContainer" containerID="2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 
15:48:10.164816 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} err="failed to get container status \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": rpc error: code = NotFound desc = could not find container \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": container with ID starting with 2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.164834 4878 scope.go:117] "RemoveContainer" containerID="f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.165238 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135"} err="failed to get container status \"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\": rpc error: code = NotFound desc = could not find container \"f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135\": container with ID starting with f58990388b9666723abc4e1a1b31d6887cb376e72c2810013d649073bc996135 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.165254 4878 scope.go:117] "RemoveContainer" containerID="f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.165505 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db"} err="failed to get container status \"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\": rpc error: code = NotFound desc = could not find container \"f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db\": container with ID starting with 
f0d8a52287fcca27997f7043bf1d6c528664adf0f14f90d5fa648cff2959c8db not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.165523 4878 scope.go:117] "RemoveContainer" containerID="5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.165788 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec"} err="failed to get container status \"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\": rpc error: code = NotFound desc = could not find container \"5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec\": container with ID starting with 5fdd1db886d03af1b20c6b69ac51f1c6ac50989b993c76c6717418c10f7f6fec not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.165804 4878 scope.go:117] "RemoveContainer" containerID="c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.166026 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616"} err="failed to get container status \"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\": rpc error: code = NotFound desc = could not find container \"c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616\": container with ID starting with c5f03217ef3ecae63ebb0e865d23b54d39de8e76de0a8339b10ffc07cb271616 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.166049 4878 scope.go:117] "RemoveContainer" containerID="288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.166312 4878 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e"} err="failed to get container status \"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\": rpc error: code = NotFound desc = could not find container \"288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e\": container with ID starting with 288316c1cead565bd9c573197447c20792e7a84466b06d4d21600b10de83402e not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.166345 4878 scope.go:117] "RemoveContainer" containerID="4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.166618 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd"} err="failed to get container status \"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\": rpc error: code = NotFound desc = could not find container \"4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd\": container with ID starting with 4d365f46bb3f6013fa14c0b91c40d78f5fdfc28944af186ad706c09e47bafcbd not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.166693 4878 scope.go:117] "RemoveContainer" containerID="b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.167065 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f"} err="failed to get container status \"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\": rpc error: code = NotFound desc = could not find container \"b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f\": container with ID starting with b7339a3573d9398b4492acae4d16756386652c4608c3f472e533482f8e67576f not found: ID does not 
exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.167092 4878 scope.go:117] "RemoveContainer" containerID="4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.167490 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31"} err="failed to get container status \"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\": rpc error: code = NotFound desc = could not find container \"4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31\": container with ID starting with 4713afc24bedf33689af44aa87889b17f57e438adc3b8d5100c1f84cab232b31 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.167507 4878 scope.go:117] "RemoveContainer" containerID="14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.167720 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305"} err="failed to get container status \"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\": rpc error: code = NotFound desc = could not find container \"14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305\": container with ID starting with 14783f90e9053bd5da2b417870944696d4ecfedc16ea13f9eaf654e8360c9305 not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.167738 4878 scope.go:117] "RemoveContainer" containerID="2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.168038 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e"} err="failed to get container status 
\"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": rpc error: code = NotFound desc = could not find container \"2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e\": container with ID starting with 2d032b0d101a8684c93dc2fd82cadc2df8281db9c422c40459e5aa358ec34e9e not found: ID does not exist" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.910463 4878 generic.go:334] "Generic (PLEG): container finished" podID="f34265c7-5f6c-4bdc-a188-777fd0c05d6b" containerID="e5ee97bcbd622b13861056296fdcb559689f198986e6ddf17ee8f7309acc8a76" exitCode=0 Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.910530 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerDied","Data":"e5ee97bcbd622b13861056296fdcb559689f198986e6ddf17ee8f7309acc8a76"} Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.910754 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerStarted","Data":"31eb5abb746e8dbfb473361265089f441f728f3db71e28c98e1281703d9f8ddd"} Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.916911 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/2.log" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.917618 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/1.log" Dec 04 15:48:10 crc kubenswrapper[4878]: I1204 15:48:10.917682 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9p8p7" event={"ID":"c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757","Type":"ContainerStarted","Data":"224fb911f10e83f2b064ffb406533462b09340ea33f4f1d9ef4ed305f267e44e"} Dec 04 15:48:11 crc kubenswrapper[4878]: I1204 
15:48:11.188212 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b6e8498-be44-4b9c-9dd3-dc08f9515f2e" path="/var/lib/kubelet/pods/5b6e8498-be44-4b9c-9dd3-dc08f9515f2e/volumes" Dec 04 15:48:11 crc kubenswrapper[4878]: I1204 15:48:11.928915 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerStarted","Data":"abf17f07af92214889adb1be720944c04a6fa4d8feded8e2718d31c26faa5ae6"} Dec 04 15:48:11 crc kubenswrapper[4878]: I1204 15:48:11.928969 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerStarted","Data":"a98c2ef7905618641efbc172343830fa5e9b81483e937f47566485d4e7d2501d"} Dec 04 15:48:11 crc kubenswrapper[4878]: I1204 15:48:11.928983 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerStarted","Data":"f2407c8b7344fbdda8a84cd1b6ab79ad82dca0bfd755640b79f4e6b26ecc4ddb"} Dec 04 15:48:12 crc kubenswrapper[4878]: I1204 15:48:12.940134 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerStarted","Data":"0ecbeb8d0ac34cae4f926d256cd9db0e0b7d306a48add86d3b3bc7a4ee62fb6e"} Dec 04 15:48:12 crc kubenswrapper[4878]: I1204 15:48:12.940698 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerStarted","Data":"e4730004500725ec23425007f7ece2694ed3bcda4d75ef8b62a562d5032ac039"} Dec 04 15:48:14 crc kubenswrapper[4878]: I1204 15:48:14.960333 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" 
event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerStarted","Data":"39c96cecfec4675157c76f605c105b17b565b5a5f0b93c743a9618cd809d463e"} Dec 04 15:48:18 crc kubenswrapper[4878]: I1204 15:48:18.983235 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerStarted","Data":"4b0fe4c08831f87dca161c41a1cb8c5bdac3018a8d25bce728e640fb4980c41a"} Dec 04 15:48:19 crc kubenswrapper[4878]: I1204 15:48:19.991610 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" event={"ID":"f34265c7-5f6c-4bdc-a188-777fd0c05d6b","Type":"ContainerStarted","Data":"a144fc10cd2e76fdf9e53c92d9149ec4acec42d71ddbbec799cd532579dc7b91"} Dec 04 15:48:19 crc kubenswrapper[4878]: I1204 15:48:19.992178 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:19 crc kubenswrapper[4878]: I1204 15:48:19.992318 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:19 crc kubenswrapper[4878]: I1204 15:48:19.992353 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:20 crc kubenswrapper[4878]: I1204 15:48:20.022393 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:20 crc kubenswrapper[4878]: I1204 15:48:20.022437 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" podStartSLOduration=11.022419256 podStartE2EDuration="11.022419256s" podCreationTimestamp="2025-12-04 15:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:48:20.018238228 +0000 UTC 
m=+743.980775184" watchObservedRunningTime="2025-12-04 15:48:20.022419256 +0000 UTC m=+743.984956212" Dec 04 15:48:20 crc kubenswrapper[4878]: I1204 15:48:20.025429 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:40 crc kubenswrapper[4878]: I1204 15:48:40.047320 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zms4n" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.561012 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26"] Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.562940 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.564754 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26"] Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.565437 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.649909 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.650021 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.650053 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr4km\" (UniqueName: \"kubernetes.io/projected/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-kube-api-access-lr4km\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.751405 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.751600 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.751634 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr4km\" (UniqueName: \"kubernetes.io/projected/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-kube-api-access-lr4km\") pod 
\"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.752324 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.752533 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.780690 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr4km\" (UniqueName: \"kubernetes.io/projected/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-kube-api-access-lr4km\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:47 crc kubenswrapper[4878]: I1204 15:48:47.878612 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:48 crc kubenswrapper[4878]: I1204 15:48:48.459920 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26"] Dec 04 15:48:48 crc kubenswrapper[4878]: I1204 15:48:48.567142 4878 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.234636 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" event={"ID":"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7","Type":"ContainerStarted","Data":"79d2f8f27c0532f86fe0ac078dc1e471be3547d3473531c35b834bfe33f2da67"} Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.234983 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" event={"ID":"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7","Type":"ContainerStarted","Data":"35a633915c86e474d29f0d9ffdc4976ed41575657b4348351d8b4a47f2206d16"} Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.293999 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hslqd"] Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.295682 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.298128 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hslqd"] Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.319389 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-catalog-content\") pod \"redhat-operators-hslqd\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.319484 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-utilities\") pod \"redhat-operators-hslqd\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.319516 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thp65\" (UniqueName: \"kubernetes.io/projected/5fcf3677-183c-4081-8750-41633a9b0e00-kube-api-access-thp65\") pod \"redhat-operators-hslqd\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.421201 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-utilities\") pod \"redhat-operators-hslqd\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.421295 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-thp65\" (UniqueName: \"kubernetes.io/projected/5fcf3677-183c-4081-8750-41633a9b0e00-kube-api-access-thp65\") pod \"redhat-operators-hslqd\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.421361 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-catalog-content\") pod \"redhat-operators-hslqd\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.421817 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-utilities\") pod \"redhat-operators-hslqd\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.421971 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-catalog-content\") pod \"redhat-operators-hslqd\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.443861 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thp65\" (UniqueName: \"kubernetes.io/projected/5fcf3677-183c-4081-8750-41633a9b0e00-kube-api-access-thp65\") pod \"redhat-operators-hslqd\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.627059 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:49 crc kubenswrapper[4878]: I1204 15:48:49.955419 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hslqd"] Dec 04 15:48:50 crc kubenswrapper[4878]: I1204 15:48:50.242208 4878 generic.go:334] "Generic (PLEG): container finished" podID="5fcf3677-183c-4081-8750-41633a9b0e00" containerID="39975d8af432237700ad0d32cbcd61a972423bdc1a1fb3701964c3f4ba753179" exitCode=0 Dec 04 15:48:50 crc kubenswrapper[4878]: I1204 15:48:50.242313 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hslqd" event={"ID":"5fcf3677-183c-4081-8750-41633a9b0e00","Type":"ContainerDied","Data":"39975d8af432237700ad0d32cbcd61a972423bdc1a1fb3701964c3f4ba753179"} Dec 04 15:48:50 crc kubenswrapper[4878]: I1204 15:48:50.242628 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hslqd" event={"ID":"5fcf3677-183c-4081-8750-41633a9b0e00","Type":"ContainerStarted","Data":"844719e9220d55126b9558cabf3c8dfad271685f6a5c157b7c36da2f1c116fa2"} Dec 04 15:48:50 crc kubenswrapper[4878]: I1204 15:48:50.244676 4878 generic.go:334] "Generic (PLEG): container finished" podID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerID="79d2f8f27c0532f86fe0ac078dc1e471be3547d3473531c35b834bfe33f2da67" exitCode=0 Dec 04 15:48:50 crc kubenswrapper[4878]: I1204 15:48:50.244726 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" event={"ID":"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7","Type":"ContainerDied","Data":"79d2f8f27c0532f86fe0ac078dc1e471be3547d3473531c35b834bfe33f2da67"} Dec 04 15:48:51 crc kubenswrapper[4878]: I1204 15:48:51.401434 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hslqd" 
event={"ID":"5fcf3677-183c-4081-8750-41633a9b0e00","Type":"ContainerStarted","Data":"9783b358064fd5f3b061cb975fa79e5aaff36a04b10d60e053ac6e29f5024e17"} Dec 04 15:48:52 crc kubenswrapper[4878]: I1204 15:48:52.409968 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" event={"ID":"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7","Type":"ContainerStarted","Data":"8c7f2a843149777f414255c6375b4f2c33c820f90d1e270602c941fda48296f4"} Dec 04 15:48:53 crc kubenswrapper[4878]: I1204 15:48:53.438843 4878 generic.go:334] "Generic (PLEG): container finished" podID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerID="8c7f2a843149777f414255c6375b4f2c33c820f90d1e270602c941fda48296f4" exitCode=0 Dec 04 15:48:53 crc kubenswrapper[4878]: I1204 15:48:53.438909 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" event={"ID":"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7","Type":"ContainerDied","Data":"8c7f2a843149777f414255c6375b4f2c33c820f90d1e270602c941fda48296f4"} Dec 04 15:48:54 crc kubenswrapper[4878]: I1204 15:48:54.447309 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" event={"ID":"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7","Type":"ContainerStarted","Data":"ce270c72052df7b3bc5df68142a6493a4efc9fbb123cd8a5d37ffa4b5d1486a1"} Dec 04 15:48:54 crc kubenswrapper[4878]: I1204 15:48:54.467531 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" podStartSLOduration=5.612895213 podStartE2EDuration="7.467511024s" podCreationTimestamp="2025-12-04 15:48:47 +0000 UTC" firstStartedPulling="2025-12-04 15:48:50.246607153 +0000 UTC m=+774.209144109" lastFinishedPulling="2025-12-04 15:48:52.101222964 +0000 UTC m=+776.063759920" 
observedRunningTime="2025-12-04 15:48:54.463996804 +0000 UTC m=+778.426533770" watchObservedRunningTime="2025-12-04 15:48:54.467511024 +0000 UTC m=+778.430047980" Dec 04 15:48:55 crc kubenswrapper[4878]: I1204 15:48:55.455302 4878 generic.go:334] "Generic (PLEG): container finished" podID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerID="ce270c72052df7b3bc5df68142a6493a4efc9fbb123cd8a5d37ffa4b5d1486a1" exitCode=0 Dec 04 15:48:55 crc kubenswrapper[4878]: I1204 15:48:55.455359 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" event={"ID":"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7","Type":"ContainerDied","Data":"ce270c72052df7b3bc5df68142a6493a4efc9fbb123cd8a5d37ffa4b5d1486a1"} Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.689767 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.780253 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lr4km\" (UniqueName: \"kubernetes.io/projected/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-kube-api-access-lr4km\") pod \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.780385 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-bundle\") pod \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.780435 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-util\") pod 
\"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\" (UID: \"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7\") " Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.780980 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-bundle" (OuterVolumeSpecName: "bundle") pod "3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" (UID: "3f27b5b8-26d9-405a-9c12-4ce85d5fcec7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.790614 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-kube-api-access-lr4km" (OuterVolumeSpecName: "kube-api-access-lr4km") pod "3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" (UID: "3f27b5b8-26d9-405a-9c12-4ce85d5fcec7"). InnerVolumeSpecName "kube-api-access-lr4km". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.796634 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-util" (OuterVolumeSpecName: "util") pod "3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" (UID: "3f27b5b8-26d9-405a-9c12-4ce85d5fcec7"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.882248 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lr4km\" (UniqueName: \"kubernetes.io/projected/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-kube-api-access-lr4km\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.882277 4878 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:56 crc kubenswrapper[4878]: I1204 15:48:56.882285 4878 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f27b5b8-26d9-405a-9c12-4ce85d5fcec7-util\") on node \"crc\" DevicePath \"\"" Dec 04 15:48:57 crc kubenswrapper[4878]: I1204 15:48:57.469454 4878 generic.go:334] "Generic (PLEG): container finished" podID="5fcf3677-183c-4081-8750-41633a9b0e00" containerID="9783b358064fd5f3b061cb975fa79e5aaff36a04b10d60e053ac6e29f5024e17" exitCode=0 Dec 04 15:48:57 crc kubenswrapper[4878]: I1204 15:48:57.469544 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hslqd" event={"ID":"5fcf3677-183c-4081-8750-41633a9b0e00","Type":"ContainerDied","Data":"9783b358064fd5f3b061cb975fa79e5aaff36a04b10d60e053ac6e29f5024e17"} Dec 04 15:48:57 crc kubenswrapper[4878]: I1204 15:48:57.474378 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" event={"ID":"3f27b5b8-26d9-405a-9c12-4ce85d5fcec7","Type":"ContainerDied","Data":"35a633915c86e474d29f0d9ffdc4976ed41575657b4348351d8b4a47f2206d16"} Dec 04 15:48:57 crc kubenswrapper[4878]: I1204 15:48:57.474446 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35a633915c86e474d29f0d9ffdc4976ed41575657b4348351d8b4a47f2206d16" Dec 04 
15:48:57 crc kubenswrapper[4878]: I1204 15:48:57.474483 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26" Dec 04 15:48:57 crc kubenswrapper[4878]: I1204 15:48:57.524480 4878 scope.go:117] "RemoveContainer" containerID="9ded2db4a6013c707819a36aaf49fbf97fc452b6ba71bf9e15ed363a2c7eede0" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.483057 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hslqd" event={"ID":"5fcf3677-183c-4081-8750-41633a9b0e00","Type":"ContainerStarted","Data":"b6d9a7cfe2b8a3ed55d2301dba9d0df3faa64517cfe741fea0669a25292b6cd7"} Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.485586 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9p8p7_c13e3dc9-eb06-42c3-98c3-ce6c5ccd4757/kube-multus/2.log" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.514318 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hslqd" podStartSLOduration=1.888755202 podStartE2EDuration="9.514286341s" podCreationTimestamp="2025-12-04 15:48:49 +0000 UTC" firstStartedPulling="2025-12-04 15:48:50.244018276 +0000 UTC m=+774.206555232" lastFinishedPulling="2025-12-04 15:48:57.869549415 +0000 UTC m=+781.832086371" observedRunningTime="2025-12-04 15:48:58.509680622 +0000 UTC m=+782.472217588" watchObservedRunningTime="2025-12-04 15:48:58.514286341 +0000 UTC m=+782.476823297" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.546805 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn"] Dec 04 15:48:58 crc kubenswrapper[4878]: E1204 15:48:58.547166 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerName="extract" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 
15:48:58.547189 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerName="extract" Dec 04 15:48:58 crc kubenswrapper[4878]: E1204 15:48:58.547213 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerName="util" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.547221 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerName="util" Dec 04 15:48:58 crc kubenswrapper[4878]: E1204 15:48:58.547237 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerName="pull" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.547247 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerName="pull" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.547374 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f27b5b8-26d9-405a-9c12-4ce85d5fcec7" containerName="extract" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.547921 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.550010 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.550261 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xc42z" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.550639 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.566550 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn"] Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.699746 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drz2b\" (UniqueName: \"kubernetes.io/projected/e20ad695-cd00-478d-9e02-662d9bceb1a5-kube-api-access-drz2b\") pod \"nmstate-operator-5b5b58f5c8-n7ffn\" (UID: \"e20ad695-cd00-478d-9e02-662d9bceb1a5\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.801309 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drz2b\" (UniqueName: \"kubernetes.io/projected/e20ad695-cd00-478d-9e02-662d9bceb1a5-kube-api-access-drz2b\") pod \"nmstate-operator-5b5b58f5c8-n7ffn\" (UID: \"e20ad695-cd00-478d-9e02-662d9bceb1a5\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.822363 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drz2b\" (UniqueName: \"kubernetes.io/projected/e20ad695-cd00-478d-9e02-662d9bceb1a5-kube-api-access-drz2b\") pod \"nmstate-operator-5b5b58f5c8-n7ffn\" (UID: 
\"e20ad695-cd00-478d-9e02-662d9bceb1a5\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn" Dec 04 15:48:58 crc kubenswrapper[4878]: I1204 15:48:58.861902 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn" Dec 04 15:48:59 crc kubenswrapper[4878]: I1204 15:48:59.627262 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:59 crc kubenswrapper[4878]: I1204 15:48:59.627759 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:48:59 crc kubenswrapper[4878]: I1204 15:48:59.637492 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn"] Dec 04 15:48:59 crc kubenswrapper[4878]: W1204 15:48:59.645897 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode20ad695_cd00_478d_9e02_662d9bceb1a5.slice/crio-e372038b75fa6b9bd68f6587e8fa5df3bb5f6bd06cb3c4e71b2ce98a77af9bdf WatchSource:0}: Error finding container e372038b75fa6b9bd68f6587e8fa5df3bb5f6bd06cb3c4e71b2ce98a77af9bdf: Status 404 returned error can't find the container with id e372038b75fa6b9bd68f6587e8fa5df3bb5f6bd06cb3c4e71b2ce98a77af9bdf Dec 04 15:49:00 crc kubenswrapper[4878]: I1204 15:49:00.498575 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn" event={"ID":"e20ad695-cd00-478d-9e02-662d9bceb1a5","Type":"ContainerStarted","Data":"e372038b75fa6b9bd68f6587e8fa5df3bb5f6bd06cb3c4e71b2ce98a77af9bdf"} Dec 04 15:49:00 crc kubenswrapper[4878]: I1204 15:49:00.671330 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hslqd" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" containerName="registry-server" probeResult="failure" output=< Dec 
04 15:49:00 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 15:49:00 crc kubenswrapper[4878]: > Dec 04 15:49:00 crc kubenswrapper[4878]: I1204 15:49:00.840142 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:49:00 crc kubenswrapper[4878]: I1204 15:49:00.840213 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:49:06 crc kubenswrapper[4878]: I1204 15:49:06.538168 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn" event={"ID":"e20ad695-cd00-478d-9e02-662d9bceb1a5","Type":"ContainerStarted","Data":"3799c23f3e07650866b0e5fd0571e450d26d609892bb68cfe587dede47dfcc4d"} Dec 04 15:49:06 crc kubenswrapper[4878]: I1204 15:49:06.560495 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-n7ffn" podStartSLOduration=3.116807141 podStartE2EDuration="8.5604667s" podCreationTimestamp="2025-12-04 15:48:58 +0000 UTC" firstStartedPulling="2025-12-04 15:48:59.650379092 +0000 UTC m=+783.612916048" lastFinishedPulling="2025-12-04 15:49:05.094038651 +0000 UTC m=+789.056575607" observedRunningTime="2025-12-04 15:49:06.557024571 +0000 UTC m=+790.519561527" watchObservedRunningTime="2025-12-04 15:49:06.5604667 +0000 UTC m=+790.523003656" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.552299 4878 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b"] Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.553681 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.564298 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9"] Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.565750 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.571012 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.574339 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b"] Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.574668 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-s84xl" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.583738 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9"] Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.591164 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-mcflh"] Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.592397 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.630505 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/836684ff-7d25-4bdb-82ba-130f9a37da2b-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-fxdj9\" (UID: \"836684ff-7d25-4bdb-82ba-130f9a37da2b\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.630565 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c72c2914-f98d-4c5b-b885-16f7bcf2f793-nmstate-lock\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.630611 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c72c2914-f98d-4c5b-b885-16f7bcf2f793-dbus-socket\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.630640 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r5k6\" (UniqueName: \"kubernetes.io/projected/836684ff-7d25-4bdb-82ba-130f9a37da2b-kube-api-access-7r5k6\") pod \"nmstate-webhook-5f6d4c5ccb-fxdj9\" (UID: \"836684ff-7d25-4bdb-82ba-130f9a37da2b\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.630730 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c72c2914-f98d-4c5b-b885-16f7bcf2f793-ovs-socket\") pod 
\"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.630953 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7xbp\" (UniqueName: \"kubernetes.io/projected/c72c2914-f98d-4c5b-b885-16f7bcf2f793-kube-api-access-l7xbp\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.631030 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzpkh\" (UniqueName: \"kubernetes.io/projected/f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b-kube-api-access-vzpkh\") pod \"nmstate-metrics-7f946cbc9-m2m8b\" (UID: \"f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.732265 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7xbp\" (UniqueName: \"kubernetes.io/projected/c72c2914-f98d-4c5b-b885-16f7bcf2f793-kube-api-access-l7xbp\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.732377 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzpkh\" (UniqueName: \"kubernetes.io/projected/f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b-kube-api-access-vzpkh\") pod \"nmstate-metrics-7f946cbc9-m2m8b\" (UID: \"f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.732457 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/836684ff-7d25-4bdb-82ba-130f9a37da2b-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-fxdj9\" (UID: \"836684ff-7d25-4bdb-82ba-130f9a37da2b\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.732489 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c72c2914-f98d-4c5b-b885-16f7bcf2f793-nmstate-lock\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.732531 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c72c2914-f98d-4c5b-b885-16f7bcf2f793-dbus-socket\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.732559 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r5k6\" (UniqueName: \"kubernetes.io/projected/836684ff-7d25-4bdb-82ba-130f9a37da2b-kube-api-access-7r5k6\") pod \"nmstate-webhook-5f6d4c5ccb-fxdj9\" (UID: \"836684ff-7d25-4bdb-82ba-130f9a37da2b\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.732604 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c72c2914-f98d-4c5b-b885-16f7bcf2f793-ovs-socket\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: E1204 15:49:07.732632 4878 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 04 15:49:07 crc 
kubenswrapper[4878]: I1204 15:49:07.732692 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/c72c2914-f98d-4c5b-b885-16f7bcf2f793-ovs-socket\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: E1204 15:49:07.732719 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/836684ff-7d25-4bdb-82ba-130f9a37da2b-tls-key-pair podName:836684ff-7d25-4bdb-82ba-130f9a37da2b nodeName:}" failed. No retries permitted until 2025-12-04 15:49:08.232690303 +0000 UTC m=+792.195227339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/836684ff-7d25-4bdb-82ba-130f9a37da2b-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-fxdj9" (UID: "836684ff-7d25-4bdb-82ba-130f9a37da2b") : secret "openshift-nmstate-webhook" not found Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.732917 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/c72c2914-f98d-4c5b-b885-16f7bcf2f793-nmstate-lock\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.732937 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/c72c2914-f98d-4c5b-b885-16f7bcf2f793-dbus-socket\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.735104 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2"] Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.735968 4878 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.738074 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-g4rb9" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.738956 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.742580 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.755722 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2"] Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.763092 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r5k6\" (UniqueName: \"kubernetes.io/projected/836684ff-7d25-4bdb-82ba-130f9a37da2b-kube-api-access-7r5k6\") pod \"nmstate-webhook-5f6d4c5ccb-fxdj9\" (UID: \"836684ff-7d25-4bdb-82ba-130f9a37da2b\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.767191 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7xbp\" (UniqueName: \"kubernetes.io/projected/c72c2914-f98d-4c5b-b885-16f7bcf2f793-kube-api-access-l7xbp\") pod \"nmstate-handler-mcflh\" (UID: \"c72c2914-f98d-4c5b-b885-16f7bcf2f793\") " pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.798152 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzpkh\" (UniqueName: \"kubernetes.io/projected/f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b-kube-api-access-vzpkh\") pod \"nmstate-metrics-7f946cbc9-m2m8b\" (UID: \"f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b\") " 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.833923 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmq7d\" (UniqueName: \"kubernetes.io/projected/5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d-kube-api-access-dmq7d\") pod \"nmstate-console-plugin-7fbb5f6569-k7kd2\" (UID: \"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.834350 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-k7kd2\" (UID: \"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.834712 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-k7kd2\" (UID: \"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.908564 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.936664 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-k7kd2\" (UID: \"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.936801 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmq7d\" (UniqueName: \"kubernetes.io/projected/5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d-kube-api-access-dmq7d\") pod \"nmstate-console-plugin-7fbb5f6569-k7kd2\" (UID: \"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.936835 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-k7kd2\" (UID: \"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.938453 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-k7kd2\" (UID: \"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.943374 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-k7kd2\" (UID: \"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.944063 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-bc6845dd9-z7ll7"] Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.945238 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.959648 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.967690 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmq7d\" (UniqueName: \"kubernetes.io/projected/5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d-kube-api-access-dmq7d\") pod \"nmstate-console-plugin-7fbb5f6569-k7kd2\" (UID: \"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:07 crc kubenswrapper[4878]: I1204 15:49:07.982695 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bc6845dd9-z7ll7"] Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.038563 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-console-serving-cert\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.038699 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-console-oauth-config\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.038766 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-oauth-serving-cert\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.038807 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-trusted-ca-bundle\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.038833 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-service-ca\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.038859 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-console-config\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.038895 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-rfdd9\" (UniqueName: \"kubernetes.io/projected/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-kube-api-access-rfdd9\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.063923 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.140241 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-service-ca\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.140292 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-console-config\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.140313 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfdd9\" (UniqueName: \"kubernetes.io/projected/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-kube-api-access-rfdd9\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.140343 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-console-serving-cert\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " 
pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.140377 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-console-oauth-config\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.140422 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-oauth-serving-cert\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.140456 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-trusted-ca-bundle\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.142675 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-console-config\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.142743 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-oauth-serving-cert\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" 
Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.142966 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-service-ca\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.143200 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-trusted-ca-bundle\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.144466 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-console-oauth-config\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.145604 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-console-serving-cert\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.161263 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfdd9\" (UniqueName: \"kubernetes.io/projected/fd5e5b85-b9a8-4b54-88d6-aefb7224522e-kube-api-access-rfdd9\") pod \"console-bc6845dd9-z7ll7\" (UID: \"fd5e5b85-b9a8-4b54-88d6-aefb7224522e\") " pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.241367 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/836684ff-7d25-4bdb-82ba-130f9a37da2b-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-fxdj9\" (UID: \"836684ff-7d25-4bdb-82ba-130f9a37da2b\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.246477 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/836684ff-7d25-4bdb-82ba-130f9a37da2b-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-fxdj9\" (UID: \"836684ff-7d25-4bdb-82ba-130f9a37da2b\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.309770 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.519421 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b"] Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.537969 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2"] Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.546108 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.561585 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" event={"ID":"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d","Type":"ContainerStarted","Data":"1502cd8aa70f922deda631e87d7aac21702d01b4f22f4797e1583bb3da4c583b"} Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.573862 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mcflh" event={"ID":"c72c2914-f98d-4c5b-b885-16f7bcf2f793","Type":"ContainerStarted","Data":"cdfea50365e678eeb4b60e74f280adf8752c9641624f4f836f3f85aeb6e1b258"} Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.575483 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b" event={"ID":"f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b","Type":"ContainerStarted","Data":"a40a2857cc3ef35ecac6f83162cbf90f951fa285463de098c8d0ddcccc3a80dd"} Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.850796 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9"] Dec 04 15:49:08 crc kubenswrapper[4878]: W1204 15:49:08.854308 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod836684ff_7d25_4bdb_82ba_130f9a37da2b.slice/crio-d34026be63261f58477d6353fdd33517e0e7ea35db3deffc66be6526fe75afed WatchSource:0}: Error finding container d34026be63261f58477d6353fdd33517e0e7ea35db3deffc66be6526fe75afed: Status 404 returned error can't find the container with id d34026be63261f58477d6353fdd33517e0e7ea35db3deffc66be6526fe75afed Dec 04 15:49:08 crc kubenswrapper[4878]: I1204 15:49:08.894041 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-bc6845dd9-z7ll7"] Dec 04 15:49:08 crc kubenswrapper[4878]: W1204 
15:49:08.898721 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd5e5b85_b9a8_4b54_88d6_aefb7224522e.slice/crio-955869b7eb08140282befe48c22e1027f2243e79d8334f87ba6364fc2039a6d3 WatchSource:0}: Error finding container 955869b7eb08140282befe48c22e1027f2243e79d8334f87ba6364fc2039a6d3: Status 404 returned error can't find the container with id 955869b7eb08140282befe48c22e1027f2243e79d8334f87ba6364fc2039a6d3 Dec 04 15:49:09 crc kubenswrapper[4878]: I1204 15:49:09.585076 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" event={"ID":"836684ff-7d25-4bdb-82ba-130f9a37da2b","Type":"ContainerStarted","Data":"d34026be63261f58477d6353fdd33517e0e7ea35db3deffc66be6526fe75afed"} Dec 04 15:49:09 crc kubenswrapper[4878]: I1204 15:49:09.586711 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc6845dd9-z7ll7" event={"ID":"fd5e5b85-b9a8-4b54-88d6-aefb7224522e","Type":"ContainerStarted","Data":"ea7eac3beee8a46e226dc6120f19535ab411ddf547f5ba6602be64e73a267756"} Dec 04 15:49:09 crc kubenswrapper[4878]: I1204 15:49:09.586734 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-bc6845dd9-z7ll7" event={"ID":"fd5e5b85-b9a8-4b54-88d6-aefb7224522e","Type":"ContainerStarted","Data":"955869b7eb08140282befe48c22e1027f2243e79d8334f87ba6364fc2039a6d3"} Dec 04 15:49:09 crc kubenswrapper[4878]: I1204 15:49:09.613057 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-bc6845dd9-z7ll7" podStartSLOduration=2.613030788 podStartE2EDuration="2.613030788s" podCreationTimestamp="2025-12-04 15:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:49:09.608363108 +0000 UTC m=+793.570900064" watchObservedRunningTime="2025-12-04 15:49:09.613030788 
+0000 UTC m=+793.575567744" Dec 04 15:49:09 crc kubenswrapper[4878]: I1204 15:49:09.747498 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:49:09 crc kubenswrapper[4878]: I1204 15:49:09.961986 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:49:10 crc kubenswrapper[4878]: I1204 15:49:10.152590 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hslqd"] Dec 04 15:49:11 crc kubenswrapper[4878]: I1204 15:49:11.603217 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hslqd" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" containerName="registry-server" containerID="cri-o://b6d9a7cfe2b8a3ed55d2301dba9d0df3faa64517cfe741fea0669a25292b6cd7" gracePeriod=2 Dec 04 15:49:12 crc kubenswrapper[4878]: I1204 15:49:12.612194 4878 generic.go:334] "Generic (PLEG): container finished" podID="5fcf3677-183c-4081-8750-41633a9b0e00" containerID="b6d9a7cfe2b8a3ed55d2301dba9d0df3faa64517cfe741fea0669a25292b6cd7" exitCode=0 Dec 04 15:49:12 crc kubenswrapper[4878]: I1204 15:49:12.612236 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hslqd" event={"ID":"5fcf3677-183c-4081-8750-41633a9b0e00","Type":"ContainerDied","Data":"b6d9a7cfe2b8a3ed55d2301dba9d0df3faa64517cfe741fea0669a25292b6cd7"} Dec 04 15:49:14 crc kubenswrapper[4878]: I1204 15:49:14.662818 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:49:14 crc kubenswrapper[4878]: I1204 15:49:14.843908 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thp65\" (UniqueName: \"kubernetes.io/projected/5fcf3677-183c-4081-8750-41633a9b0e00-kube-api-access-thp65\") pod \"5fcf3677-183c-4081-8750-41633a9b0e00\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " Dec 04 15:49:14 crc kubenswrapper[4878]: I1204 15:49:14.843988 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-catalog-content\") pod \"5fcf3677-183c-4081-8750-41633a9b0e00\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " Dec 04 15:49:14 crc kubenswrapper[4878]: I1204 15:49:14.844203 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-utilities\") pod \"5fcf3677-183c-4081-8750-41633a9b0e00\" (UID: \"5fcf3677-183c-4081-8750-41633a9b0e00\") " Dec 04 15:49:14 crc kubenswrapper[4878]: I1204 15:49:14.845472 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-utilities" (OuterVolumeSpecName: "utilities") pod "5fcf3677-183c-4081-8750-41633a9b0e00" (UID: "5fcf3677-183c-4081-8750-41633a9b0e00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:49:14 crc kubenswrapper[4878]: I1204 15:49:14.850983 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fcf3677-183c-4081-8750-41633a9b0e00-kube-api-access-thp65" (OuterVolumeSpecName: "kube-api-access-thp65") pod "5fcf3677-183c-4081-8750-41633a9b0e00" (UID: "5fcf3677-183c-4081-8750-41633a9b0e00"). InnerVolumeSpecName "kube-api-access-thp65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:49:14 crc kubenswrapper[4878]: I1204 15:49:14.945571 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:14 crc kubenswrapper[4878]: I1204 15:49:14.945626 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thp65\" (UniqueName: \"kubernetes.io/projected/5fcf3677-183c-4081-8750-41633a9b0e00-kube-api-access-thp65\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:14 crc kubenswrapper[4878]: I1204 15:49:14.948018 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fcf3677-183c-4081-8750-41633a9b0e00" (UID: "5fcf3677-183c-4081-8750-41633a9b0e00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.049649 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fcf3677-183c-4081-8750-41633a9b0e00-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.631995 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" event={"ID":"836684ff-7d25-4bdb-82ba-130f9a37da2b","Type":"ContainerStarted","Data":"fd834fe4e504ced818190f017600a7ecac61ed59fb0c3d633c093b86ed3168d3"} Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.632146 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.633916 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" event={"ID":"5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d","Type":"ContainerStarted","Data":"657555c8c47720c4115a87cdebd59df41ebd3302772f0760e70123ac934e3099"} Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.640429 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-mcflh" event={"ID":"c72c2914-f98d-4c5b-b885-16f7bcf2f793","Type":"ContainerStarted","Data":"fe6f8eca1b04495a2bae40435e6b92a7cd93eccd3e6e5c2c1dee45f856819866"} Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.640563 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.641650 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b" event={"ID":"f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b","Type":"ContainerStarted","Data":"f131bc46ed51589b7b9461b66c7c41b690dda22492b63a8cf8b38888603881e8"} Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.643070 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hslqd" event={"ID":"5fcf3677-183c-4081-8750-41633a9b0e00","Type":"ContainerDied","Data":"844719e9220d55126b9558cabf3c8dfad271685f6a5c157b7c36da2f1c116fa2"} Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.643104 4878 scope.go:117] "RemoveContainer" containerID="b6d9a7cfe2b8a3ed55d2301dba9d0df3faa64517cfe741fea0669a25292b6cd7" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.643236 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hslqd" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.695717 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-k7kd2" podStartSLOduration=2.248092836 podStartE2EDuration="8.695691523s" podCreationTimestamp="2025-12-04 15:49:07 +0000 UTC" firstStartedPulling="2025-12-04 15:49:08.53902715 +0000 UTC m=+792.501564106" lastFinishedPulling="2025-12-04 15:49:14.986625827 +0000 UTC m=+798.949162793" observedRunningTime="2025-12-04 15:49:15.692720316 +0000 UTC m=+799.655257282" watchObservedRunningTime="2025-12-04 15:49:15.695691523 +0000 UTC m=+799.658228479" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.696663 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" podStartSLOduration=2.517918953 podStartE2EDuration="8.696655348s" podCreationTimestamp="2025-12-04 15:49:07 +0000 UTC" firstStartedPulling="2025-12-04 15:49:08.857096382 +0000 UTC m=+792.819633348" lastFinishedPulling="2025-12-04 15:49:15.035832777 +0000 UTC m=+798.998369743" observedRunningTime="2025-12-04 15:49:15.660941256 +0000 UTC m=+799.623478232" watchObservedRunningTime="2025-12-04 15:49:15.696655348 +0000 UTC m=+799.659192294" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.712053 4878 scope.go:117] "RemoveContainer" containerID="9783b358064fd5f3b061cb975fa79e5aaff36a04b10d60e053ac6e29f5024e17" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.722719 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-mcflh" podStartSLOduration=1.705834136 podStartE2EDuration="8.72269186s" podCreationTimestamp="2025-12-04 15:49:07 +0000 UTC" firstStartedPulling="2025-12-04 15:49:08.013729038 +0000 UTC m=+791.976265994" lastFinishedPulling="2025-12-04 15:49:15.030586762 +0000 UTC m=+798.993123718" 
observedRunningTime="2025-12-04 15:49:15.716782548 +0000 UTC m=+799.679319494" watchObservedRunningTime="2025-12-04 15:49:15.72269186 +0000 UTC m=+799.685228816" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.750150 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hslqd"] Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.753118 4878 scope.go:117] "RemoveContainer" containerID="39975d8af432237700ad0d32cbcd61a972423bdc1a1fb3701964c3f4ba753179" Dec 04 15:49:15 crc kubenswrapper[4878]: I1204 15:49:15.759651 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hslqd"] Dec 04 15:49:17 crc kubenswrapper[4878]: I1204 15:49:17.189646 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" path="/var/lib/kubelet/pods/5fcf3677-183c-4081-8750-41633a9b0e00/volumes" Dec 04 15:49:18 crc kubenswrapper[4878]: I1204 15:49:18.311116 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:18 crc kubenswrapper[4878]: I1204 15:49:18.311197 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:18 crc kubenswrapper[4878]: I1204 15:49:18.316475 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:18 crc kubenswrapper[4878]: I1204 15:49:18.672092 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-bc6845dd9-z7ll7" Dec 04 15:49:18 crc kubenswrapper[4878]: I1204 15:49:18.738679 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4x82r"] Dec 04 15:49:20 crc kubenswrapper[4878]: I1204 15:49:20.678974 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b" event={"ID":"f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b","Type":"ContainerStarted","Data":"0b7f3e64ba16891267aed6a2ef7b8cd57b3d81e7d810167829e85ef846bafd9d"} Dec 04 15:49:20 crc kubenswrapper[4878]: I1204 15:49:20.697325 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-m2m8b" podStartSLOduration=2.434439087 podStartE2EDuration="13.697305579s" podCreationTimestamp="2025-12-04 15:49:07 +0000 UTC" firstStartedPulling="2025-12-04 15:49:08.519801564 +0000 UTC m=+792.482338520" lastFinishedPulling="2025-12-04 15:49:19.782668056 +0000 UTC m=+803.745205012" observedRunningTime="2025-12-04 15:49:20.696328484 +0000 UTC m=+804.658865460" watchObservedRunningTime="2025-12-04 15:49:20.697305579 +0000 UTC m=+804.659842535" Dec 04 15:49:22 crc kubenswrapper[4878]: I1204 15:49:22.982584 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-mcflh" Dec 04 15:49:28 crc kubenswrapper[4878]: I1204 15:49:28.554239 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-fxdj9" Dec 04 15:49:30 crc kubenswrapper[4878]: I1204 15:49:30.840839 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:49:30 crc kubenswrapper[4878]: I1204 15:49:30.840984 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:49:39 crc 
kubenswrapper[4878]: I1204 15:49:39.960708 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7"] Dec 04 15:49:39 crc kubenswrapper[4878]: E1204 15:49:39.961800 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" containerName="extract-utilities" Dec 04 15:49:39 crc kubenswrapper[4878]: I1204 15:49:39.961820 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" containerName="extract-utilities" Dec 04 15:49:39 crc kubenswrapper[4878]: E1204 15:49:39.961837 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" containerName="registry-server" Dec 04 15:49:39 crc kubenswrapper[4878]: I1204 15:49:39.961847 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" containerName="registry-server" Dec 04 15:49:39 crc kubenswrapper[4878]: E1204 15:49:39.961888 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" containerName="extract-content" Dec 04 15:49:39 crc kubenswrapper[4878]: I1204 15:49:39.961898 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" containerName="extract-content" Dec 04 15:49:39 crc kubenswrapper[4878]: I1204 15:49:39.962082 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcf3677-183c-4081-8750-41633a9b0e00" containerName="registry-server" Dec 04 15:49:39 crc kubenswrapper[4878]: I1204 15:49:39.962837 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:39 crc kubenswrapper[4878]: I1204 15:49:39.966622 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 04 15:49:39 crc kubenswrapper[4878]: I1204 15:49:39.977014 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7"] Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.017840 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.019153 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x77x\" (UniqueName: \"kubernetes.io/projected/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-kube-api-access-8x77x\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.019340 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: 
I1204 15:49:40.121184 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.121255 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x77x\" (UniqueName: \"kubernetes.io/projected/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-kube-api-access-8x77x\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.121301 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.121893 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.121917 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.142054 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x77x\" (UniqueName: \"kubernetes.io/projected/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-kube-api-access-8x77x\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.279428 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.695097 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7"] Dec 04 15:49:40 crc kubenswrapper[4878]: I1204 15:49:40.819362 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" event={"ID":"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a","Type":"ContainerStarted","Data":"334ee17aab2461aa9022ede6d8e9633962c71e101f863f72693919baeee70f8b"} Dec 04 15:49:42 crc kubenswrapper[4878]: I1204 15:49:42.833967 4878 generic.go:334] "Generic (PLEG): container finished" podID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerID="2bbd50ba4d31cbdb9c35001a7bb2011d361ae6178de92435180e50b070a1dcb1" exitCode=0 Dec 04 15:49:42 crc kubenswrapper[4878]: I1204 15:49:42.834499 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" event={"ID":"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a","Type":"ContainerDied","Data":"2bbd50ba4d31cbdb9c35001a7bb2011d361ae6178de92435180e50b070a1dcb1"} Dec 04 15:49:43 crc kubenswrapper[4878]: I1204 15:49:43.800301 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4x82r" podUID="988eba95-b990-4f5a-ad25-e4129a8849d1" containerName="console" containerID="cri-o://55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc" gracePeriod=15 Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.166215 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4x82r_988eba95-b990-4f5a-ad25-e4129a8849d1/console/0.log" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.166590 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.181624 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-service-ca\") pod \"988eba95-b990-4f5a-ad25-e4129a8849d1\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.181679 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-serving-cert\") pod \"988eba95-b990-4f5a-ad25-e4129a8849d1\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.181704 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-trusted-ca-bundle\") pod 
\"988eba95-b990-4f5a-ad25-e4129a8849d1\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.181756 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7qtl\" (UniqueName: \"kubernetes.io/projected/988eba95-b990-4f5a-ad25-e4129a8849d1-kube-api-access-t7qtl\") pod \"988eba95-b990-4f5a-ad25-e4129a8849d1\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.181773 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-oauth-serving-cert\") pod \"988eba95-b990-4f5a-ad25-e4129a8849d1\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.181836 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-oauth-config\") pod \"988eba95-b990-4f5a-ad25-e4129a8849d1\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.181853 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-console-config\") pod \"988eba95-b990-4f5a-ad25-e4129a8849d1\" (UID: \"988eba95-b990-4f5a-ad25-e4129a8849d1\") " Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.182700 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "988eba95-b990-4f5a-ad25-e4129a8849d1" (UID: "988eba95-b990-4f5a-ad25-e4129a8849d1"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.182722 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "988eba95-b990-4f5a-ad25-e4129a8849d1" (UID: "988eba95-b990-4f5a-ad25-e4129a8849d1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.183183 4878 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.183970 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-service-ca" (OuterVolumeSpecName: "service-ca") pod "988eba95-b990-4f5a-ad25-e4129a8849d1" (UID: "988eba95-b990-4f5a-ad25-e4129a8849d1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.184134 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-console-config" (OuterVolumeSpecName: "console-config") pod "988eba95-b990-4f5a-ad25-e4129a8849d1" (UID: "988eba95-b990-4f5a-ad25-e4129a8849d1"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.188072 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "988eba95-b990-4f5a-ad25-e4129a8849d1" (UID: "988eba95-b990-4f5a-ad25-e4129a8849d1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.188142 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "988eba95-b990-4f5a-ad25-e4129a8849d1" (UID: "988eba95-b990-4f5a-ad25-e4129a8849d1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.190622 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988eba95-b990-4f5a-ad25-e4129a8849d1-kube-api-access-t7qtl" (OuterVolumeSpecName: "kube-api-access-t7qtl") pod "988eba95-b990-4f5a-ad25-e4129a8849d1" (UID: "988eba95-b990-4f5a-ad25-e4129a8849d1"). InnerVolumeSpecName "kube-api-access-t7qtl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.284119 4878 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-service-ca\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.284325 4878 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.284410 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7qtl\" (UniqueName: \"kubernetes.io/projected/988eba95-b990-4f5a-ad25-e4129a8849d1-kube-api-access-t7qtl\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.284464 4878 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.284516 4878 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/988eba95-b990-4f5a-ad25-e4129a8849d1-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.284573 4878 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/988eba95-b990-4f5a-ad25-e4129a8849d1-console-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.847691 4878 generic.go:334] "Generic (PLEG): container finished" podID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerID="c32a2e3c370c31caf5284d8a87924cf65800512c3dca151f71d9708fe7670b6b" exitCode=0 Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 
15:49:44.847778 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" event={"ID":"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a","Type":"ContainerDied","Data":"c32a2e3c370c31caf5284d8a87924cf65800512c3dca151f71d9708fe7670b6b"} Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.851143 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4x82r_988eba95-b990-4f5a-ad25-e4129a8849d1/console/0.log" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.851203 4878 generic.go:334] "Generic (PLEG): container finished" podID="988eba95-b990-4f5a-ad25-e4129a8849d1" containerID="55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc" exitCode=2 Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.851243 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4x82r" event={"ID":"988eba95-b990-4f5a-ad25-e4129a8849d1","Type":"ContainerDied","Data":"55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc"} Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.851275 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4x82r" event={"ID":"988eba95-b990-4f5a-ad25-e4129a8849d1","Type":"ContainerDied","Data":"d1ff95bfcab48fdded180d5a66503fe461ee33d1eadd443944827cb9eadd4891"} Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.851296 4878 scope.go:117] "RemoveContainer" containerID="55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.851429 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4x82r" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.882447 4878 scope.go:117] "RemoveContainer" containerID="55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc" Dec 04 15:49:44 crc kubenswrapper[4878]: E1204 15:49:44.882855 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc\": container with ID starting with 55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc not found: ID does not exist" containerID="55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.882921 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc"} err="failed to get container status \"55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc\": rpc error: code = NotFound desc = could not find container \"55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc\": container with ID starting with 55be79217f70337d76f7869c3e8b9c26176ff65f6835ebdc7fc86bc8573a6dfc not found: ID does not exist" Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.886852 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4x82r"] Dec 04 15:49:44 crc kubenswrapper[4878]: I1204 15:49:44.890304 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4x82r"] Dec 04 15:49:45 crc kubenswrapper[4878]: I1204 15:49:45.188021 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988eba95-b990-4f5a-ad25-e4129a8849d1" path="/var/lib/kubelet/pods/988eba95-b990-4f5a-ad25-e4129a8849d1/volumes" Dec 04 15:49:45 crc kubenswrapper[4878]: I1204 15:49:45.865069 4878 generic.go:334] "Generic (PLEG): 
container finished" podID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerID="23c34840e2975897b96ee38e9cf4924bae4f0a238b6c3ec78f524b298cbe1fc6" exitCode=0 Dec 04 15:49:45 crc kubenswrapper[4878]: I1204 15:49:45.865165 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" event={"ID":"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a","Type":"ContainerDied","Data":"23c34840e2975897b96ee38e9cf4924bae4f0a238b6c3ec78f524b298cbe1fc6"} Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.116374 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.222894 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8x77x\" (UniqueName: \"kubernetes.io/projected/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-kube-api-access-8x77x\") pod \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.223131 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-bundle\") pod \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.223246 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-util\") pod \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\" (UID: \"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a\") " Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.224438 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-bundle" (OuterVolumeSpecName: "bundle") pod "1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" (UID: "1ec855d4-1744-41cf-b49d-0a75a9a3cd2a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.228745 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-kube-api-access-8x77x" (OuterVolumeSpecName: "kube-api-access-8x77x") pod "1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" (UID: "1ec855d4-1744-41cf-b49d-0a75a9a3cd2a"). InnerVolumeSpecName "kube-api-access-8x77x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.238465 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-util" (OuterVolumeSpecName: "util") pod "1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" (UID: "1ec855d4-1744-41cf-b49d-0a75a9a3cd2a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.325332 4878 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.325581 4878 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-util\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.325647 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8x77x\" (UniqueName: \"kubernetes.io/projected/1ec855d4-1744-41cf-b49d-0a75a9a3cd2a-kube-api-access-8x77x\") on node \"crc\" DevicePath \"\"" Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.885052 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" event={"ID":"1ec855d4-1744-41cf-b49d-0a75a9a3cd2a","Type":"ContainerDied","Data":"334ee17aab2461aa9022ede6d8e9633962c71e101f863f72693919baeee70f8b"} Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.885096 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="334ee17aab2461aa9022ede6d8e9633962c71e101f863f72693919baeee70f8b" Dec 04 15:49:47 crc kubenswrapper[4878]: I1204 15:49:47.885160 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.986702 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b"] Dec 04 15:49:57 crc kubenswrapper[4878]: E1204 15:49:57.987885 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerName="extract" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.987905 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerName="extract" Dec 04 15:49:57 crc kubenswrapper[4878]: E1204 15:49:57.987918 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerName="pull" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.987925 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerName="pull" Dec 04 15:49:57 crc kubenswrapper[4878]: E1204 15:49:57.987957 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerName="util" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.987966 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerName="util" Dec 04 15:49:57 crc kubenswrapper[4878]: E1204 15:49:57.987978 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988eba95-b990-4f5a-ad25-e4129a8849d1" containerName="console" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.987986 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="988eba95-b990-4f5a-ad25-e4129a8849d1" containerName="console" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.988119 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="988eba95-b990-4f5a-ad25-e4129a8849d1" 
containerName="console" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.988137 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec855d4-1744-41cf-b49d-0a75a9a3cd2a" containerName="extract" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.988690 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.996467 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.996753 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.997091 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.997329 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 04 15:49:57 crc kubenswrapper[4878]: I1204 15:49:57.997497 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-wlvc2" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.012807 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b"] Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.072173 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2236b740-707c-4652-994a-3b5289a54cf1-webhook-cert\") pod \"metallb-operator-controller-manager-5fb6fc4594-rpm6b\" (UID: \"2236b740-707c-4652-994a-3b5289a54cf1\") " pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 
15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.072244 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2236b740-707c-4652-994a-3b5289a54cf1-apiservice-cert\") pod \"metallb-operator-controller-manager-5fb6fc4594-rpm6b\" (UID: \"2236b740-707c-4652-994a-3b5289a54cf1\") " pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.072306 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrnqw\" (UniqueName: \"kubernetes.io/projected/2236b740-707c-4652-994a-3b5289a54cf1-kube-api-access-nrnqw\") pod \"metallb-operator-controller-manager-5fb6fc4594-rpm6b\" (UID: \"2236b740-707c-4652-994a-3b5289a54cf1\") " pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.174067 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrnqw\" (UniqueName: \"kubernetes.io/projected/2236b740-707c-4652-994a-3b5289a54cf1-kube-api-access-nrnqw\") pod \"metallb-operator-controller-manager-5fb6fc4594-rpm6b\" (UID: \"2236b740-707c-4652-994a-3b5289a54cf1\") " pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.174168 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2236b740-707c-4652-994a-3b5289a54cf1-webhook-cert\") pod \"metallb-operator-controller-manager-5fb6fc4594-rpm6b\" (UID: \"2236b740-707c-4652-994a-3b5289a54cf1\") " pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.174210 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/2236b740-707c-4652-994a-3b5289a54cf1-apiservice-cert\") pod \"metallb-operator-controller-manager-5fb6fc4594-rpm6b\" (UID: \"2236b740-707c-4652-994a-3b5289a54cf1\") " pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.186274 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2236b740-707c-4652-994a-3b5289a54cf1-apiservice-cert\") pod \"metallb-operator-controller-manager-5fb6fc4594-rpm6b\" (UID: \"2236b740-707c-4652-994a-3b5289a54cf1\") " pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.186292 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2236b740-707c-4652-994a-3b5289a54cf1-webhook-cert\") pod \"metallb-operator-controller-manager-5fb6fc4594-rpm6b\" (UID: \"2236b740-707c-4652-994a-3b5289a54cf1\") " pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.212565 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrnqw\" (UniqueName: \"kubernetes.io/projected/2236b740-707c-4652-994a-3b5289a54cf1-kube-api-access-nrnqw\") pod \"metallb-operator-controller-manager-5fb6fc4594-rpm6b\" (UID: \"2236b740-707c-4652-994a-3b5289a54cf1\") " pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.308791 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.348141 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw"] Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.348981 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.351613 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.351799 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-82wpz" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.357474 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.370652 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw"] Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.385836 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db6409af-e753-47ac-8370-71aedbe7208d-webhook-cert\") pod \"metallb-operator-webhook-server-759dcc4c7f-mgcfw\" (UID: \"db6409af-e753-47ac-8370-71aedbe7208d\") " pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.385908 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db6409af-e753-47ac-8370-71aedbe7208d-apiservice-cert\") pod 
\"metallb-operator-webhook-server-759dcc4c7f-mgcfw\" (UID: \"db6409af-e753-47ac-8370-71aedbe7208d\") " pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.385940 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvmf\" (UniqueName: \"kubernetes.io/projected/db6409af-e753-47ac-8370-71aedbe7208d-kube-api-access-pzvmf\") pod \"metallb-operator-webhook-server-759dcc4c7f-mgcfw\" (UID: \"db6409af-e753-47ac-8370-71aedbe7208d\") " pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.486646 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db6409af-e753-47ac-8370-71aedbe7208d-webhook-cert\") pod \"metallb-operator-webhook-server-759dcc4c7f-mgcfw\" (UID: \"db6409af-e753-47ac-8370-71aedbe7208d\") " pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.486986 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db6409af-e753-47ac-8370-71aedbe7208d-apiservice-cert\") pod \"metallb-operator-webhook-server-759dcc4c7f-mgcfw\" (UID: \"db6409af-e753-47ac-8370-71aedbe7208d\") " pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.487014 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzvmf\" (UniqueName: \"kubernetes.io/projected/db6409af-e753-47ac-8370-71aedbe7208d-kube-api-access-pzvmf\") pod \"metallb-operator-webhook-server-759dcc4c7f-mgcfw\" (UID: \"db6409af-e753-47ac-8370-71aedbe7208d\") " pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 
15:49:58.491826 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/db6409af-e753-47ac-8370-71aedbe7208d-webhook-cert\") pod \"metallb-operator-webhook-server-759dcc4c7f-mgcfw\" (UID: \"db6409af-e753-47ac-8370-71aedbe7208d\") " pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.493939 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/db6409af-e753-47ac-8370-71aedbe7208d-apiservice-cert\") pod \"metallb-operator-webhook-server-759dcc4c7f-mgcfw\" (UID: \"db6409af-e753-47ac-8370-71aedbe7208d\") " pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.512455 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzvmf\" (UniqueName: \"kubernetes.io/projected/db6409af-e753-47ac-8370-71aedbe7208d-kube-api-access-pzvmf\") pod \"metallb-operator-webhook-server-759dcc4c7f-mgcfw\" (UID: \"db6409af-e753-47ac-8370-71aedbe7208d\") " pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.695222 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.744963 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b"] Dec 04 15:49:58 crc kubenswrapper[4878]: W1204 15:49:58.755316 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2236b740_707c_4652_994a_3b5289a54cf1.slice/crio-2c5dc228940453dba26c3584036dede37ec59b57df178ee21521a1c9951192ed WatchSource:0}: Error finding container 2c5dc228940453dba26c3584036dede37ec59b57df178ee21521a1c9951192ed: Status 404 returned error can't find the container with id 2c5dc228940453dba26c3584036dede37ec59b57df178ee21521a1c9951192ed Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.948128 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" event={"ID":"2236b740-707c-4652-994a-3b5289a54cf1","Type":"ContainerStarted","Data":"2c5dc228940453dba26c3584036dede37ec59b57df178ee21521a1c9951192ed"} Dec 04 15:49:58 crc kubenswrapper[4878]: I1204 15:49:58.961641 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw"] Dec 04 15:49:58 crc kubenswrapper[4878]: W1204 15:49:58.961640 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb6409af_e753_47ac_8370_71aedbe7208d.slice/crio-59ba987d3106a3ad14a7c96783e5c8397d65ce83febec42e845956a02cb1ceed WatchSource:0}: Error finding container 59ba987d3106a3ad14a7c96783e5c8397d65ce83febec42e845956a02cb1ceed: Status 404 returned error can't find the container with id 59ba987d3106a3ad14a7c96783e5c8397d65ce83febec42e845956a02cb1ceed Dec 04 15:49:59 crc kubenswrapper[4878]: I1204 15:49:59.954234 4878 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" event={"ID":"db6409af-e753-47ac-8370-71aedbe7208d","Type":"ContainerStarted","Data":"59ba987d3106a3ad14a7c96783e5c8397d65ce83febec42e845956a02cb1ceed"} Dec 04 15:50:00 crc kubenswrapper[4878]: I1204 15:50:00.846481 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:50:00 crc kubenswrapper[4878]: I1204 15:50:00.846924 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:50:00 crc kubenswrapper[4878]: I1204 15:50:00.846995 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:50:00 crc kubenswrapper[4878]: I1204 15:50:00.847860 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24186795437d00a19bfab5413d9cd89c8f17b821e40eb4736dd5bfc921c524ca"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 15:50:00 crc kubenswrapper[4878]: I1204 15:50:00.848006 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" 
containerID="cri-o://24186795437d00a19bfab5413d9cd89c8f17b821e40eb4736dd5bfc921c524ca" gracePeriod=600 Dec 04 15:50:01 crc kubenswrapper[4878]: I1204 15:50:01.970218 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="24186795437d00a19bfab5413d9cd89c8f17b821e40eb4736dd5bfc921c524ca" exitCode=0 Dec 04 15:50:01 crc kubenswrapper[4878]: I1204 15:50:01.970284 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"24186795437d00a19bfab5413d9cd89c8f17b821e40eb4736dd5bfc921c524ca"} Dec 04 15:50:01 crc kubenswrapper[4878]: I1204 15:50:01.970580 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"1e4b462af175e16fbdc402637b1b344ec58c91bfdf927904f1fcb6f988194d7e"} Dec 04 15:50:01 crc kubenswrapper[4878]: I1204 15:50:01.970603 4878 scope.go:117] "RemoveContainer" containerID="873271c54ba316ad167ad25cdfeb8504d7bc2f0b6a256c6664efd047b51ceff5" Dec 04 15:50:02 crc kubenswrapper[4878]: I1204 15:50:02.977241 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" event={"ID":"2236b740-707c-4652-994a-3b5289a54cf1","Type":"ContainerStarted","Data":"aad17deaa9ee9105f69b4230a673d921e27756090130ad7134afbfbc4abd696e"} Dec 04 15:50:02 crc kubenswrapper[4878]: I1204 15:50:02.977633 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:50:02 crc kubenswrapper[4878]: I1204 15:50:02.997738 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" 
podStartSLOduration=2.523938722 podStartE2EDuration="5.997708201s" podCreationTimestamp="2025-12-04 15:49:57 +0000 UTC" firstStartedPulling="2025-12-04 15:49:58.761906203 +0000 UTC m=+842.724443159" lastFinishedPulling="2025-12-04 15:50:02.235675682 +0000 UTC m=+846.198212638" observedRunningTime="2025-12-04 15:50:02.997142067 +0000 UTC m=+846.959679033" watchObservedRunningTime="2025-12-04 15:50:02.997708201 +0000 UTC m=+846.960245157" Dec 04 15:50:11 crc kubenswrapper[4878]: I1204 15:50:11.356540 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" event={"ID":"db6409af-e753-47ac-8370-71aedbe7208d","Type":"ContainerStarted","Data":"1fc51a74041447f9a53347e344797c0749f5a0a0e034c92b2525686f1ec8c62e"} Dec 04 15:50:11 crc kubenswrapper[4878]: I1204 15:50:11.357066 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:50:11 crc kubenswrapper[4878]: I1204 15:50:11.375806 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" podStartSLOduration=1.4753128549999999 podStartE2EDuration="13.375782976s" podCreationTimestamp="2025-12-04 15:49:58 +0000 UTC" firstStartedPulling="2025-12-04 15:49:58.964818562 +0000 UTC m=+842.927355518" lastFinishedPulling="2025-12-04 15:50:10.865288683 +0000 UTC m=+854.827825639" observedRunningTime="2025-12-04 15:50:11.372155754 +0000 UTC m=+855.334692720" watchObservedRunningTime="2025-12-04 15:50:11.375782976 +0000 UTC m=+855.338319932" Dec 04 15:50:28 crc kubenswrapper[4878]: I1204 15:50:28.701918 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-759dcc4c7f-mgcfw" Dec 04 15:50:38 crc kubenswrapper[4878]: I1204 15:50:38.316910 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-controller-manager-5fb6fc4594-rpm6b" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.256602 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265"] Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.257596 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.261895 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.266285 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-flv9m" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.268499 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9tt4z"] Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.270970 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.275339 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.275651 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.281179 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265"] Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.350007 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f12abc8b-282b-472f-9bc9-b00c63c1d45c-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-xh265\" (UID: \"f12abc8b-282b-472f-9bc9-b00c63c1d45c\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.350070 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slzjd\" (UniqueName: \"kubernetes.io/projected/8338b9f2-c79a-4232-b705-b3a21426ade6-kube-api-access-slzjd\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.350103 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8338b9f2-c79a-4232-b705-b3a21426ade6-frr-startup\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.350127 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-reloader\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.350146 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-metrics\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.350162 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8338b9f2-c79a-4232-b705-b3a21426ade6-metrics-certs\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.350207 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-frr-sockets\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.350228 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-492hc\" (UniqueName: \"kubernetes.io/projected/f12abc8b-282b-472f-9bc9-b00c63c1d45c-kube-api-access-492hc\") pod \"frr-k8s-webhook-server-7fcb986d4-xh265\" (UID: \"f12abc8b-282b-472f-9bc9-b00c63c1d45c\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.350250 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-frr-conf\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.379866 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-cwrcm"] Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.380767 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.386447 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.386453 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.386460 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.386584 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-9nr2l" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.417292 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-jrsl2"] Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.418291 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.421429 4878 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.437116 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jrsl2"] Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.451225 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f12abc8b-282b-472f-9bc9-b00c63c1d45c-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-xh265\" (UID: \"f12abc8b-282b-472f-9bc9-b00c63c1d45c\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.451298 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slzjd\" (UniqueName: \"kubernetes.io/projected/8338b9f2-c79a-4232-b705-b3a21426ade6-kube-api-access-slzjd\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.451336 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8338b9f2-c79a-4232-b705-b3a21426ade6-frr-startup\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.451912 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-reloader\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.451950 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-metrics\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.451975 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8338b9f2-c79a-4232-b705-b3a21426ade6-metrics-certs\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.452024 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-frr-sockets\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.452053 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-492hc\" (UniqueName: \"kubernetes.io/projected/f12abc8b-282b-472f-9bc9-b00c63c1d45c-kube-api-access-492hc\") pod \"frr-k8s-webhook-server-7fcb986d4-xh265\" (UID: \"f12abc8b-282b-472f-9bc9-b00c63c1d45c\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.452082 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-frr-conf\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.452537 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-reloader\") pod 
\"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.452594 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-frr-conf\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.452786 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-metrics\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.452808 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/8338b9f2-c79a-4232-b705-b3a21426ade6-frr-startup\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.452846 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/8338b9f2-c79a-4232-b705-b3a21426ade6-frr-sockets\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.460740 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8338b9f2-c79a-4232-b705-b3a21426ade6-metrics-certs\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.469662 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/f12abc8b-282b-472f-9bc9-b00c63c1d45c-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-xh265\" (UID: \"f12abc8b-282b-472f-9bc9-b00c63c1d45c\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.491788 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slzjd\" (UniqueName: \"kubernetes.io/projected/8338b9f2-c79a-4232-b705-b3a21426ade6-kube-api-access-slzjd\") pod \"frr-k8s-9tt4z\" (UID: \"8338b9f2-c79a-4232-b705-b3a21426ade6\") " pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.498620 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-492hc\" (UniqueName: \"kubernetes.io/projected/f12abc8b-282b-472f-9bc9-b00c63c1d45c-kube-api-access-492hc\") pod \"frr-k8s-webhook-server-7fcb986d4-xh265\" (UID: \"f12abc8b-282b-472f-9bc9-b00c63c1d45c\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.553992 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-memberlist\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.554097 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-metrics-certs\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.554136 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmb9m\" (UniqueName: 
\"kubernetes.io/projected/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-kube-api-access-lmb9m\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.554211 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/293b50a4-7270-4560-bb54-ad9394acbf8d-metallb-excludel2\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.554275 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-cert\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.554429 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-metrics-certs\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.554496 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fklx8\" (UniqueName: \"kubernetes.io/projected/293b50a4-7270-4560-bb54-ad9394acbf8d-kube-api-access-fklx8\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.574351 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.586995 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.655497 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-metrics-certs\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.655546 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmb9m\" (UniqueName: \"kubernetes.io/projected/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-kube-api-access-lmb9m\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.656238 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/293b50a4-7270-4560-bb54-ad9394acbf8d-metallb-excludel2\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: E1204 15:50:39.656360 4878 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 04 15:50:39 crc kubenswrapper[4878]: E1204 15:50:39.656417 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-metrics-certs podName:b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f nodeName:}" failed. No retries permitted until 2025-12-04 15:50:40.156400708 +0000 UTC m=+884.118937664 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-metrics-certs") pod "controller-f8648f98b-jrsl2" (UID: "b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f") : secret "controller-certs-secret" not found Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.659836 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/293b50a4-7270-4560-bb54-ad9394acbf8d-metallb-excludel2\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.659930 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-cert\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.659980 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-metrics-certs\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.660013 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fklx8\" (UniqueName: \"kubernetes.io/projected/293b50a4-7270-4560-bb54-ad9394acbf8d-kube-api-access-fklx8\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.660057 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-memberlist\") pod 
\"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: E1204 15:50:39.660230 4878 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 04 15:50:39 crc kubenswrapper[4878]: E1204 15:50:39.660275 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-memberlist podName:293b50a4-7270-4560-bb54-ad9394acbf8d nodeName:}" failed. No retries permitted until 2025-12-04 15:50:40.160260386 +0000 UTC m=+884.122797332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-memberlist") pod "speaker-cwrcm" (UID: "293b50a4-7270-4560-bb54-ad9394acbf8d") : secret "metallb-memberlist" not found Dec 04 15:50:39 crc kubenswrapper[4878]: E1204 15:50:39.663143 4878 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 04 15:50:39 crc kubenswrapper[4878]: E1204 15:50:39.663266 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-metrics-certs podName:293b50a4-7270-4560-bb54-ad9394acbf8d nodeName:}" failed. No retries permitted until 2025-12-04 15:50:40.163235372 +0000 UTC m=+884.125772328 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-metrics-certs") pod "speaker-cwrcm" (UID: "293b50a4-7270-4560-bb54-ad9394acbf8d") : secret "speaker-certs-secret" not found Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.665130 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-cert\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.693315 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fklx8\" (UniqueName: \"kubernetes.io/projected/293b50a4-7270-4560-bb54-ad9394acbf8d-kube-api-access-fklx8\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.693951 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmb9m\" (UniqueName: \"kubernetes.io/projected/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-kube-api-access-lmb9m\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:39 crc kubenswrapper[4878]: I1204 15:50:39.878491 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265"] Dec 04 15:50:40 crc kubenswrapper[4878]: I1204 15:50:40.169973 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-metrics-certs\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:40 crc kubenswrapper[4878]: I1204 15:50:40.170046 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-memberlist\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:40 crc kubenswrapper[4878]: I1204 15:50:40.170083 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-metrics-certs\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:40 crc kubenswrapper[4878]: E1204 15:50:40.170361 4878 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 04 15:50:40 crc kubenswrapper[4878]: E1204 15:50:40.170479 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-memberlist podName:293b50a4-7270-4560-bb54-ad9394acbf8d nodeName:}" failed. No retries permitted until 2025-12-04 15:50:41.170448232 +0000 UTC m=+885.132985248 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-memberlist") pod "speaker-cwrcm" (UID: "293b50a4-7270-4560-bb54-ad9394acbf8d") : secret "metallb-memberlist" not found Dec 04 15:50:40 crc kubenswrapper[4878]: I1204 15:50:40.174434 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-metrics-certs\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:40 crc kubenswrapper[4878]: I1204 15:50:40.174451 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f-metrics-certs\") pod \"controller-f8648f98b-jrsl2\" (UID: \"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f\") " pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:40 crc kubenswrapper[4878]: I1204 15:50:40.342553 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:40 crc kubenswrapper[4878]: I1204 15:50:40.528003 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" event={"ID":"f12abc8b-282b-472f-9bc9-b00c63c1d45c","Type":"ContainerStarted","Data":"7263cec0e02ef24c3ed12d3e9b08d5d1d3c0d31c43c90e4888d479d80c91ba1a"} Dec 04 15:50:40 crc kubenswrapper[4878]: I1204 15:50:40.529079 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerStarted","Data":"f3db8359b76353459d0d12466e501f7dcdd380ae047f6b736444c2099eaac836"} Dec 04 15:50:40 crc kubenswrapper[4878]: I1204 15:50:40.674001 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jrsl2"] Dec 04 15:50:40 crc kubenswrapper[4878]: W1204 15:50:40.693124 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2b693c8_a1e5_4c7c_b7a5_4b4cce0cfd0f.slice/crio-dc53e10c0a8d16e95ffb689fc6c7329e168b5c2de0e4ba47570ca2b3a4382b31 WatchSource:0}: Error finding container dc53e10c0a8d16e95ffb689fc6c7329e168b5c2de0e4ba47570ca2b3a4382b31: Status 404 returned error can't find the container with id dc53e10c0a8d16e95ffb689fc6c7329e168b5c2de0e4ba47570ca2b3a4382b31 Dec 04 15:50:41 crc kubenswrapper[4878]: I1204 15:50:41.200855 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-memberlist\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:41 crc kubenswrapper[4878]: I1204 15:50:41.205455 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: 
\"kubernetes.io/secret/293b50a4-7270-4560-bb54-ad9394acbf8d-memberlist\") pod \"speaker-cwrcm\" (UID: \"293b50a4-7270-4560-bb54-ad9394acbf8d\") " pod="metallb-system/speaker-cwrcm" Dec 04 15:50:41 crc kubenswrapper[4878]: I1204 15:50:41.500149 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-cwrcm" Dec 04 15:50:41 crc kubenswrapper[4878]: I1204 15:50:41.541545 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jrsl2" event={"ID":"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f","Type":"ContainerStarted","Data":"5ce6fbc6e376311abcaf28f48deb1cf608170dd28b310d6b3b88159bbbadef62"} Dec 04 15:50:41 crc kubenswrapper[4878]: I1204 15:50:41.541597 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jrsl2" event={"ID":"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f","Type":"ContainerStarted","Data":"756507ec35cb226f99c0405c4da25c87fd41ab0b329628e7bcd0b2de846f8146"} Dec 04 15:50:41 crc kubenswrapper[4878]: I1204 15:50:41.541609 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jrsl2" event={"ID":"b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f","Type":"ContainerStarted","Data":"dc53e10c0a8d16e95ffb689fc6c7329e168b5c2de0e4ba47570ca2b3a4382b31"} Dec 04 15:50:41 crc kubenswrapper[4878]: I1204 15:50:41.541747 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:41 crc kubenswrapper[4878]: I1204 15:50:41.580042 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-jrsl2" podStartSLOduration=2.580014455 podStartE2EDuration="2.580014455s" podCreationTimestamp="2025-12-04 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:50:41.572595235 +0000 UTC m=+885.535132201" 
watchObservedRunningTime="2025-12-04 15:50:41.580014455 +0000 UTC m=+885.542551401" Dec 04 15:50:42 crc kubenswrapper[4878]: I1204 15:50:42.552449 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwrcm" event={"ID":"293b50a4-7270-4560-bb54-ad9394acbf8d","Type":"ContainerStarted","Data":"fee2bf223aad081f37637519c391c5d1350640121f76ef331942dce6f340e090"} Dec 04 15:50:42 crc kubenswrapper[4878]: I1204 15:50:42.552829 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwrcm" event={"ID":"293b50a4-7270-4560-bb54-ad9394acbf8d","Type":"ContainerStarted","Data":"e266c14cd8d8caf8a23770b6ec94157043c0c826a29f59154ac2483a5102da4f"} Dec 04 15:50:43 crc kubenswrapper[4878]: I1204 15:50:43.625693 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-cwrcm" event={"ID":"293b50a4-7270-4560-bb54-ad9394acbf8d","Type":"ContainerStarted","Data":"5082bae4bf9cd2c5d008433d04138f03ea16d27a9cb69af0821390fe9d9fda55"} Dec 04 15:50:43 crc kubenswrapper[4878]: I1204 15:50:43.626022 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-cwrcm" Dec 04 15:50:43 crc kubenswrapper[4878]: I1204 15:50:43.760323 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-cwrcm" podStartSLOduration=4.76029402 podStartE2EDuration="4.76029402s" podCreationTimestamp="2025-12-04 15:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:50:43.754926052 +0000 UTC m=+887.717463018" watchObservedRunningTime="2025-12-04 15:50:43.76029402 +0000 UTC m=+887.722830986" Dec 04 15:50:50 crc kubenswrapper[4878]: I1204 15:50:50.348074 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-jrsl2" Dec 04 15:50:51 crc kubenswrapper[4878]: I1204 15:50:51.506000 4878 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-cwrcm" Dec 04 15:50:51 crc kubenswrapper[4878]: I1204 15:50:51.808120 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" event={"ID":"f12abc8b-282b-472f-9bc9-b00c63c1d45c","Type":"ContainerStarted","Data":"5c13abb1a150f65cca31c22e24cb48f0314b81e51f106a40453ce63c79404d5b"} Dec 04 15:50:51 crc kubenswrapper[4878]: I1204 15:50:51.808474 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:50:51 crc kubenswrapper[4878]: I1204 15:50:51.810513 4878 generic.go:334] "Generic (PLEG): container finished" podID="8338b9f2-c79a-4232-b705-b3a21426ade6" containerID="dec55146864818acd5c5f9cfa07fe793efdd457eb584e67b3ece4bcb262d7c08" exitCode=0 Dec 04 15:50:51 crc kubenswrapper[4878]: I1204 15:50:51.810566 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerDied","Data":"dec55146864818acd5c5f9cfa07fe793efdd457eb584e67b3ece4bcb262d7c08"} Dec 04 15:50:51 crc kubenswrapper[4878]: I1204 15:50:51.830059 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" podStartSLOduration=1.120236378 podStartE2EDuration="12.830030935s" podCreationTimestamp="2025-12-04 15:50:39 +0000 UTC" firstStartedPulling="2025-12-04 15:50:39.883000247 +0000 UTC m=+883.845537203" lastFinishedPulling="2025-12-04 15:50:51.592794804 +0000 UTC m=+895.555331760" observedRunningTime="2025-12-04 15:50:51.828602329 +0000 UTC m=+895.791139295" watchObservedRunningTime="2025-12-04 15:50:51.830030935 +0000 UTC m=+895.792567891" Dec 04 15:50:52 crc kubenswrapper[4878]: I1204 15:50:52.818893 4878 generic.go:334] "Generic (PLEG): container finished" podID="8338b9f2-c79a-4232-b705-b3a21426ade6" 
containerID="abd13b2f28cf25328dc4a5da25ad2c33bfa89c69b713d43a4c416edb85d1bc78" exitCode=0 Dec 04 15:50:52 crc kubenswrapper[4878]: I1204 15:50:52.818968 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerDied","Data":"abd13b2f28cf25328dc4a5da25ad2c33bfa89c69b713d43a4c416edb85d1bc78"} Dec 04 15:50:53 crc kubenswrapper[4878]: I1204 15:50:53.826467 4878 generic.go:334] "Generic (PLEG): container finished" podID="8338b9f2-c79a-4232-b705-b3a21426ade6" containerID="a729ea6b853a5f7259782f448e87153122b1f4cec662721a3c003ffee1e563f3" exitCode=0 Dec 04 15:50:53 crc kubenswrapper[4878]: I1204 15:50:53.826516 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerDied","Data":"a729ea6b853a5f7259782f448e87153122b1f4cec662721a3c003ffee1e563f3"} Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.544086 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cmm6l"] Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.545347 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cmm6l" Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.550905 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.551407 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.551661 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-dsmrz" Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.569169 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmm6l"] Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.574767 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvj7m\" (UniqueName: \"kubernetes.io/projected/6917592d-5ae7-4180-8f91-a268b9948c13-kube-api-access-dvj7m\") pod \"openstack-operator-index-cmm6l\" (UID: \"6917592d-5ae7-4180-8f91-a268b9948c13\") " pod="openstack-operators/openstack-operator-index-cmm6l" Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.676543 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvj7m\" (UniqueName: \"kubernetes.io/projected/6917592d-5ae7-4180-8f91-a268b9948c13-kube-api-access-dvj7m\") pod \"openstack-operator-index-cmm6l\" (UID: \"6917592d-5ae7-4180-8f91-a268b9948c13\") " pod="openstack-operators/openstack-operator-index-cmm6l" Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.706204 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvj7m\" (UniqueName: \"kubernetes.io/projected/6917592d-5ae7-4180-8f91-a268b9948c13-kube-api-access-dvj7m\") pod \"openstack-operator-index-cmm6l\" (UID: 
\"6917592d-5ae7-4180-8f91-a268b9948c13\") " pod="openstack-operators/openstack-operator-index-cmm6l" Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.843327 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerStarted","Data":"efed5354503d1f51bb4be0297c319664c89e9657ced1f3aef6aae26ce1f3260a"} Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.844337 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerStarted","Data":"00f798c5c043b8fdd8ada49f6910ed9fb2ae257fa8b1d8177b711f382383f8c8"} Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.844390 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerStarted","Data":"73fabdc4aae6076559deb8912c6e888c74eb94aa7520b9a7e97ce48c3dd52d10"} Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.844403 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerStarted","Data":"743ef948269f3907e0ed6753b4217ee214839db000bc4d1e65926c5b9470627c"} Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.844416 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerStarted","Data":"9d61aae27d307ac4adf73582683f087b7839dbdf91de20490672e94d65343ca8"} Dec 04 15:50:54 crc kubenswrapper[4878]: I1204 15:50:54.879262 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cmm6l" Dec 04 15:50:55 crc kubenswrapper[4878]: I1204 15:50:55.408616 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmm6l"] Dec 04 15:50:55 crc kubenswrapper[4878]: W1204 15:50:55.414336 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6917592d_5ae7_4180_8f91_a268b9948c13.slice/crio-ac9afa1798f021d37c957fb27e4ca75331001bf26c9676ecd62901b3ac271488 WatchSource:0}: Error finding container ac9afa1798f021d37c957fb27e4ca75331001bf26c9676ecd62901b3ac271488: Status 404 returned error can't find the container with id ac9afa1798f021d37c957fb27e4ca75331001bf26c9676ecd62901b3ac271488 Dec 04 15:50:55 crc kubenswrapper[4878]: I1204 15:50:55.853227 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmm6l" event={"ID":"6917592d-5ae7-4180-8f91-a268b9948c13","Type":"ContainerStarted","Data":"ac9afa1798f021d37c957fb27e4ca75331001bf26c9676ecd62901b3ac271488"} Dec 04 15:50:56 crc kubenswrapper[4878]: I1204 15:50:56.869670 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tt4z" event={"ID":"8338b9f2-c79a-4232-b705-b3a21426ade6","Type":"ContainerStarted","Data":"a6f4667df32cc6216f9238c2da79255358d414d3d196831f1423451aab3a31ca"} Dec 04 15:50:56 crc kubenswrapper[4878]: I1204 15:50:56.870304 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:56 crc kubenswrapper[4878]: I1204 15:50:56.900262 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9tt4z" podStartSLOduration=6.003827222 podStartE2EDuration="17.900237616s" podCreationTimestamp="2025-12-04 15:50:39 +0000 UTC" firstStartedPulling="2025-12-04 15:50:39.713651901 +0000 UTC m=+883.676188857" lastFinishedPulling="2025-12-04 
15:50:51.610062295 +0000 UTC m=+895.572599251" observedRunningTime="2025-12-04 15:50:56.895560946 +0000 UTC m=+900.858097922" watchObservedRunningTime="2025-12-04 15:50:56.900237616 +0000 UTC m=+900.862774572" Dec 04 15:50:58 crc kubenswrapper[4878]: I1204 15:50:58.109746 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cmm6l"] Dec 04 15:50:58 crc kubenswrapper[4878]: I1204 15:50:58.715370 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6fzqf"] Dec 04 15:50:58 crc kubenswrapper[4878]: I1204 15:50:58.716704 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6fzqf" Dec 04 15:50:58 crc kubenswrapper[4878]: I1204 15:50:58.723741 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6fzqf"] Dec 04 15:50:58 crc kubenswrapper[4878]: I1204 15:50:58.768070 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6c2g\" (UniqueName: \"kubernetes.io/projected/d52859c7-fd58-4cf6-af6f-a387abd1ea3a-kube-api-access-l6c2g\") pod \"openstack-operator-index-6fzqf\" (UID: \"d52859c7-fd58-4cf6-af6f-a387abd1ea3a\") " pod="openstack-operators/openstack-operator-index-6fzqf" Dec 04 15:50:58 crc kubenswrapper[4878]: I1204 15:50:58.874795 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6c2g\" (UniqueName: \"kubernetes.io/projected/d52859c7-fd58-4cf6-af6f-a387abd1ea3a-kube-api-access-l6c2g\") pod \"openstack-operator-index-6fzqf\" (UID: \"d52859c7-fd58-4cf6-af6f-a387abd1ea3a\") " pod="openstack-operators/openstack-operator-index-6fzqf" Dec 04 15:50:58 crc kubenswrapper[4878]: I1204 15:50:58.898106 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6c2g\" (UniqueName: 
\"kubernetes.io/projected/d52859c7-fd58-4cf6-af6f-a387abd1ea3a-kube-api-access-l6c2g\") pod \"openstack-operator-index-6fzqf\" (UID: \"d52859c7-fd58-4cf6-af6f-a387abd1ea3a\") " pod="openstack-operators/openstack-operator-index-6fzqf" Dec 04 15:50:59 crc kubenswrapper[4878]: I1204 15:50:59.085689 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6fzqf" Dec 04 15:50:59 crc kubenswrapper[4878]: I1204 15:50:59.442419 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6fzqf"] Dec 04 15:50:59 crc kubenswrapper[4878]: I1204 15:50:59.587850 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:59 crc kubenswrapper[4878]: I1204 15:50:59.624787 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:50:59 crc kubenswrapper[4878]: I1204 15:50:59.892734 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6fzqf" event={"ID":"d52859c7-fd58-4cf6-af6f-a387abd1ea3a","Type":"ContainerStarted","Data":"b28bbdc29b0f42c404b698e546d8023cfbd9febebe8b4249bfba39dd56c6f348"} Dec 04 15:50:59 crc kubenswrapper[4878]: I1204 15:50:59.892798 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6fzqf" event={"ID":"d52859c7-fd58-4cf6-af6f-a387abd1ea3a","Type":"ContainerStarted","Data":"a7ac7dfb3c464bc0367ffc6efe78a6190a05f1a005f0dfa4b2cbaf6693757497"} Dec 04 15:50:59 crc kubenswrapper[4878]: I1204 15:50:59.895366 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-cmm6l" podUID="6917592d-5ae7-4180-8f91-a268b9948c13" containerName="registry-server" containerID="cri-o://d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf" gracePeriod=2 Dec 04 15:50:59 crc 
kubenswrapper[4878]: I1204 15:50:59.895677 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmm6l" event={"ID":"6917592d-5ae7-4180-8f91-a268b9948c13","Type":"ContainerStarted","Data":"d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf"} Dec 04 15:50:59 crc kubenswrapper[4878]: I1204 15:50:59.917053 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6fzqf" podStartSLOduration=1.845396983 podStartE2EDuration="1.917030943s" podCreationTimestamp="2025-12-04 15:50:58 +0000 UTC" firstStartedPulling="2025-12-04 15:50:59.452896585 +0000 UTC m=+903.415433541" lastFinishedPulling="2025-12-04 15:50:59.524530545 +0000 UTC m=+903.487067501" observedRunningTime="2025-12-04 15:50:59.914410206 +0000 UTC m=+903.876947172" watchObservedRunningTime="2025-12-04 15:50:59.917030943 +0000 UTC m=+903.879567899" Dec 04 15:50:59 crc kubenswrapper[4878]: I1204 15:50:59.934403 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cmm6l" podStartSLOduration=2.5108152759999998 podStartE2EDuration="5.934379816s" podCreationTimestamp="2025-12-04 15:50:54 +0000 UTC" firstStartedPulling="2025-12-04 15:50:55.417513323 +0000 UTC m=+899.380050279" lastFinishedPulling="2025-12-04 15:50:58.841077863 +0000 UTC m=+902.803614819" observedRunningTime="2025-12-04 15:50:59.927910291 +0000 UTC m=+903.890447257" watchObservedRunningTime="2025-12-04 15:50:59.934379816 +0000 UTC m=+903.896916772" Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.266509 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cmm6l" Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.411276 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvj7m\" (UniqueName: \"kubernetes.io/projected/6917592d-5ae7-4180-8f91-a268b9948c13-kube-api-access-dvj7m\") pod \"6917592d-5ae7-4180-8f91-a268b9948c13\" (UID: \"6917592d-5ae7-4180-8f91-a268b9948c13\") " Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.541470 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6917592d-5ae7-4180-8f91-a268b9948c13-kube-api-access-dvj7m" (OuterVolumeSpecName: "kube-api-access-dvj7m") pod "6917592d-5ae7-4180-8f91-a268b9948c13" (UID: "6917592d-5ae7-4180-8f91-a268b9948c13"). InnerVolumeSpecName "kube-api-access-dvj7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.615194 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvj7m\" (UniqueName: \"kubernetes.io/projected/6917592d-5ae7-4180-8f91-a268b9948c13-kube-api-access-dvj7m\") on node \"crc\" DevicePath \"\"" Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.904568 4878 generic.go:334] "Generic (PLEG): container finished" podID="6917592d-5ae7-4180-8f91-a268b9948c13" containerID="d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf" exitCode=0 Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.904696 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cmm6l" Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.904687 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmm6l" event={"ID":"6917592d-5ae7-4180-8f91-a268b9948c13","Type":"ContainerDied","Data":"d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf"} Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.904843 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmm6l" event={"ID":"6917592d-5ae7-4180-8f91-a268b9948c13","Type":"ContainerDied","Data":"ac9afa1798f021d37c957fb27e4ca75331001bf26c9676ecd62901b3ac271488"} Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.904946 4878 scope.go:117] "RemoveContainer" containerID="d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf" Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.930082 4878 scope.go:117] "RemoveContainer" containerID="d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf" Dec 04 15:51:00 crc kubenswrapper[4878]: E1204 15:51:00.938256 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf\": container with ID starting with d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf not found: ID does not exist" containerID="d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf" Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.939931 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf"} err="failed to get container status \"d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf\": rpc error: code = NotFound desc = could not find container 
\"d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf\": container with ID starting with d0323dacde8e6c0d598f310e0722f61296e87cc3f762765e33ea2ef0d7d51fbf not found: ID does not exist" Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.945636 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cmm6l"] Dec 04 15:51:00 crc kubenswrapper[4878]: I1204 15:51:00.947604 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-cmm6l"] Dec 04 15:51:01 crc kubenswrapper[4878]: I1204 15:51:01.187504 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6917592d-5ae7-4180-8f91-a268b9948c13" path="/var/lib/kubelet/pods/6917592d-5ae7-4180-8f91-a268b9948c13/volumes" Dec 04 15:51:09 crc kubenswrapper[4878]: I1204 15:51:09.086372 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6fzqf" Dec 04 15:51:09 crc kubenswrapper[4878]: I1204 15:51:09.087356 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6fzqf" Dec 04 15:51:09 crc kubenswrapper[4878]: I1204 15:51:09.123669 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6fzqf" Dec 04 15:51:09 crc kubenswrapper[4878]: I1204 15:51:09.581259 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-xh265" Dec 04 15:51:09 crc kubenswrapper[4878]: I1204 15:51:09.591824 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9tt4z" Dec 04 15:51:09 crc kubenswrapper[4878]: I1204 15:51:09.984258 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6fzqf" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 
15:51:16.656625 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7"] Dec 04 15:51:16 crc kubenswrapper[4878]: E1204 15:51:16.657949 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6917592d-5ae7-4180-8f91-a268b9948c13" containerName="registry-server" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.657969 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="6917592d-5ae7-4180-8f91-a268b9948c13" containerName="registry-server" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.658112 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="6917592d-5ae7-4180-8f91-a268b9948c13" containerName="registry-server" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.659082 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.662303 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sbjvl" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.673108 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7"] Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.853389 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffnl8\" (UniqueName: \"kubernetes.io/projected/9786a318-91d9-49a6-9123-fa844e894ecc-kube-api-access-ffnl8\") pod \"208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.853445 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-bundle\") pod \"208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.853479 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-util\") pod \"208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.955134 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-util\") pod \"208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.955251 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffnl8\" (UniqueName: \"kubernetes.io/projected/9786a318-91d9-49a6-9123-fa844e894ecc-kube-api-access-ffnl8\") pod \"208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.955283 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-bundle\") pod \"208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.955654 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-util\") pod \"208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.955714 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-bundle\") pod \"208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:16 crc kubenswrapper[4878]: I1204 15:51:16.979583 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffnl8\" (UniqueName: \"kubernetes.io/projected/9786a318-91d9-49a6-9123-fa844e894ecc-kube-api-access-ffnl8\") pod \"208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:17 crc kubenswrapper[4878]: I1204 15:51:17.278240 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:17 crc kubenswrapper[4878]: I1204 15:51:17.954839 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7"] Dec 04 15:51:17 crc kubenswrapper[4878]: W1204 15:51:17.958386 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9786a318_91d9_49a6_9123_fa844e894ecc.slice/crio-d7d690a1c95d2a5186e40cc4cab92ff6b059415c478fe5928a234afccb221063 WatchSource:0}: Error finding container d7d690a1c95d2a5186e40cc4cab92ff6b059415c478fe5928a234afccb221063: Status 404 returned error can't find the container with id d7d690a1c95d2a5186e40cc4cab92ff6b059415c478fe5928a234afccb221063 Dec 04 15:51:18 crc kubenswrapper[4878]: I1204 15:51:18.009239 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" event={"ID":"9786a318-91d9-49a6-9123-fa844e894ecc","Type":"ContainerStarted","Data":"d7d690a1c95d2a5186e40cc4cab92ff6b059415c478fe5928a234afccb221063"} Dec 04 15:51:19 crc kubenswrapper[4878]: I1204 15:51:19.017177 4878 generic.go:334] "Generic (PLEG): container finished" podID="9786a318-91d9-49a6-9123-fa844e894ecc" containerID="710d8325932d6550b07229b66d2d251e5acceafbf449671ac849ebc6874a6132" exitCode=0 Dec 04 15:51:19 crc kubenswrapper[4878]: I1204 15:51:19.017290 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" event={"ID":"9786a318-91d9-49a6-9123-fa844e894ecc","Type":"ContainerDied","Data":"710d8325932d6550b07229b66d2d251e5acceafbf449671ac849ebc6874a6132"} Dec 04 15:51:20 crc kubenswrapper[4878]: I1204 15:51:20.027469 4878 generic.go:334] "Generic (PLEG): container finished" 
podID="9786a318-91d9-49a6-9123-fa844e894ecc" containerID="7d1e09d1a65c7cfac48965094c36704c8bc8897dc2c8b984e90e9dbfe7c38365" exitCode=0 Dec 04 15:51:20 crc kubenswrapper[4878]: I1204 15:51:20.027592 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" event={"ID":"9786a318-91d9-49a6-9123-fa844e894ecc","Type":"ContainerDied","Data":"7d1e09d1a65c7cfac48965094c36704c8bc8897dc2c8b984e90e9dbfe7c38365"} Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.049200 4878 generic.go:334] "Generic (PLEG): container finished" podID="9786a318-91d9-49a6-9123-fa844e894ecc" containerID="9ec15a665ac4499ab8bc91288517d794d722b85ee59854b769b67b5dec6cc1f2" exitCode=0 Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.049252 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" event={"ID":"9786a318-91d9-49a6-9123-fa844e894ecc","Type":"ContainerDied","Data":"9ec15a665ac4499ab8bc91288517d794d722b85ee59854b769b67b5dec6cc1f2"} Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.613972 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7xl2h"] Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.615434 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.627380 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xl2h"] Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.642313 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-utilities\") pod \"redhat-marketplace-7xl2h\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.642365 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-catalog-content\") pod \"redhat-marketplace-7xl2h\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.642457 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6vx\" (UniqueName: \"kubernetes.io/projected/08cee1a7-e31b-411e-b801-5983e4f516ad-kube-api-access-wf6vx\") pod \"redhat-marketplace-7xl2h\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.743746 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-utilities\") pod \"redhat-marketplace-7xl2h\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.743790 4878 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-catalog-content\") pod \"redhat-marketplace-7xl2h\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.743832 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6vx\" (UniqueName: \"kubernetes.io/projected/08cee1a7-e31b-411e-b801-5983e4f516ad-kube-api-access-wf6vx\") pod \"redhat-marketplace-7xl2h\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.744386 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-catalog-content\") pod \"redhat-marketplace-7xl2h\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.744434 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-utilities\") pod \"redhat-marketplace-7xl2h\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.763457 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6vx\" (UniqueName: \"kubernetes.io/projected/08cee1a7-e31b-411e-b801-5983e4f516ad-kube-api-access-wf6vx\") pod \"redhat-marketplace-7xl2h\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:21 crc kubenswrapper[4878]: I1204 15:51:21.935447 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.159786 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xl2h"] Dec 04 15:51:22 crc kubenswrapper[4878]: W1204 15:51:22.169341 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08cee1a7_e31b_411e_b801_5983e4f516ad.slice/crio-25071fa7000a9ee287f3cc49776548d6c8d333701a2b13896179e54b53423606 WatchSource:0}: Error finding container 25071fa7000a9ee287f3cc49776548d6c8d333701a2b13896179e54b53423606: Status 404 returned error can't find the container with id 25071fa7000a9ee287f3cc49776548d6c8d333701a2b13896179e54b53423606 Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.294234 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.362734 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-util\") pod \"9786a318-91d9-49a6-9123-fa844e894ecc\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.362848 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-bundle\") pod \"9786a318-91d9-49a6-9123-fa844e894ecc\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.362895 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffnl8\" (UniqueName: \"kubernetes.io/projected/9786a318-91d9-49a6-9123-fa844e894ecc-kube-api-access-ffnl8\") pod 
\"9786a318-91d9-49a6-9123-fa844e894ecc\" (UID: \"9786a318-91d9-49a6-9123-fa844e894ecc\") " Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.364456 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-bundle" (OuterVolumeSpecName: "bundle") pod "9786a318-91d9-49a6-9123-fa844e894ecc" (UID: "9786a318-91d9-49a6-9123-fa844e894ecc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.367823 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9786a318-91d9-49a6-9123-fa844e894ecc-kube-api-access-ffnl8" (OuterVolumeSpecName: "kube-api-access-ffnl8") pod "9786a318-91d9-49a6-9123-fa844e894ecc" (UID: "9786a318-91d9-49a6-9123-fa844e894ecc"). InnerVolumeSpecName "kube-api-access-ffnl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.377072 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-util" (OuterVolumeSpecName: "util") pod "9786a318-91d9-49a6-9123-fa844e894ecc" (UID: "9786a318-91d9-49a6-9123-fa844e894ecc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.464923 4878 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-util\") on node \"crc\" DevicePath \"\"" Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.464976 4878 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9786a318-91d9-49a6-9123-fa844e894ecc-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:51:22 crc kubenswrapper[4878]: I1204 15:51:22.464992 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffnl8\" (UniqueName: \"kubernetes.io/projected/9786a318-91d9-49a6-9123-fa844e894ecc-kube-api-access-ffnl8\") on node \"crc\" DevicePath \"\"" Dec 04 15:51:23 crc kubenswrapper[4878]: I1204 15:51:23.087668 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" Dec 04 15:51:23 crc kubenswrapper[4878]: I1204 15:51:23.087648 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7" event={"ID":"9786a318-91d9-49a6-9123-fa844e894ecc","Type":"ContainerDied","Data":"d7d690a1c95d2a5186e40cc4cab92ff6b059415c478fe5928a234afccb221063"} Dec 04 15:51:23 crc kubenswrapper[4878]: I1204 15:51:23.087811 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d690a1c95d2a5186e40cc4cab92ff6b059415c478fe5928a234afccb221063" Dec 04 15:51:23 crc kubenswrapper[4878]: I1204 15:51:23.090834 4878 generic.go:334] "Generic (PLEG): container finished" podID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerID="3f67ac67f27958c14dbd31c8440748f3627ab392e74761976693de214b15dfa7" exitCode=0 Dec 04 15:51:23 crc kubenswrapper[4878]: I1204 15:51:23.090888 4878 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xl2h" event={"ID":"08cee1a7-e31b-411e-b801-5983e4f516ad","Type":"ContainerDied","Data":"3f67ac67f27958c14dbd31c8440748f3627ab392e74761976693de214b15dfa7"} Dec 04 15:51:23 crc kubenswrapper[4878]: I1204 15:51:23.090927 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xl2h" event={"ID":"08cee1a7-e31b-411e-b801-5983e4f516ad","Type":"ContainerStarted","Data":"25071fa7000a9ee287f3cc49776548d6c8d333701a2b13896179e54b53423606"} Dec 04 15:51:25 crc kubenswrapper[4878]: I1204 15:51:25.108967 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xl2h" event={"ID":"08cee1a7-e31b-411e-b801-5983e4f516ad","Type":"ContainerStarted","Data":"caa87c0cf11366ebb93ada120cf91edd35559290c8e3449299d6cbe4cd3392d6"} Dec 04 15:51:26 crc kubenswrapper[4878]: I1204 15:51:26.117613 4878 generic.go:334] "Generic (PLEG): container finished" podID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerID="caa87c0cf11366ebb93ada120cf91edd35559290c8e3449299d6cbe4cd3392d6" exitCode=0 Dec 04 15:51:26 crc kubenswrapper[4878]: I1204 15:51:26.117664 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xl2h" event={"ID":"08cee1a7-e31b-411e-b801-5983e4f516ad","Type":"ContainerDied","Data":"caa87c0cf11366ebb93ada120cf91edd35559290c8e3449299d6cbe4cd3392d6"} Dec 04 15:51:27 crc kubenswrapper[4878]: I1204 15:51:27.955665 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq"] Dec 04 15:51:27 crc kubenswrapper[4878]: E1204 15:51:27.956544 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9786a318-91d9-49a6-9123-fa844e894ecc" containerName="pull" Dec 04 15:51:27 crc kubenswrapper[4878]: I1204 15:51:27.956567 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="9786a318-91d9-49a6-9123-fa844e894ecc" 
containerName="pull" Dec 04 15:51:27 crc kubenswrapper[4878]: E1204 15:51:27.956590 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9786a318-91d9-49a6-9123-fa844e894ecc" containerName="util" Dec 04 15:51:27 crc kubenswrapper[4878]: I1204 15:51:27.956598 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="9786a318-91d9-49a6-9123-fa844e894ecc" containerName="util" Dec 04 15:51:27 crc kubenswrapper[4878]: E1204 15:51:27.956613 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9786a318-91d9-49a6-9123-fa844e894ecc" containerName="extract" Dec 04 15:51:27 crc kubenswrapper[4878]: I1204 15:51:27.956621 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="9786a318-91d9-49a6-9123-fa844e894ecc" containerName="extract" Dec 04 15:51:27 crc kubenswrapper[4878]: I1204 15:51:27.956762 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="9786a318-91d9-49a6-9123-fa844e894ecc" containerName="extract" Dec 04 15:51:27 crc kubenswrapper[4878]: I1204 15:51:27.957301 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" Dec 04 15:51:27 crc kubenswrapper[4878]: I1204 15:51:27.960911 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-565ll" Dec 04 15:51:28 crc kubenswrapper[4878]: I1204 15:51:28.016307 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq"] Dec 04 15:51:28 crc kubenswrapper[4878]: I1204 15:51:28.073964 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fppzn\" (UniqueName: \"kubernetes.io/projected/08b81a71-e15e-4321-932c-37c52be4cf74-kube-api-access-fppzn\") pod \"openstack-operator-controller-operator-5557b664dc-pw4vq\" (UID: \"08b81a71-e15e-4321-932c-37c52be4cf74\") " pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" Dec 04 15:51:28 crc kubenswrapper[4878]: I1204 15:51:28.131089 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xl2h" event={"ID":"08cee1a7-e31b-411e-b801-5983e4f516ad","Type":"ContainerStarted","Data":"78967977ca007fe8f38c79dfe19309854e7f3bbbecfe134228adb65130cf1185"} Dec 04 15:51:28 crc kubenswrapper[4878]: I1204 15:51:28.164744 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7xl2h" podStartSLOduration=3.106004769 podStartE2EDuration="7.164721116s" podCreationTimestamp="2025-12-04 15:51:21 +0000 UTC" firstStartedPulling="2025-12-04 15:51:23.093308485 +0000 UTC m=+927.055845441" lastFinishedPulling="2025-12-04 15:51:27.152024832 +0000 UTC m=+931.114561788" observedRunningTime="2025-12-04 15:51:28.160166789 +0000 UTC m=+932.122703745" watchObservedRunningTime="2025-12-04 15:51:28.164721116 +0000 UTC m=+932.127258072" Dec 04 15:51:28 crc kubenswrapper[4878]: 
I1204 15:51:28.176195 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fppzn\" (UniqueName: \"kubernetes.io/projected/08b81a71-e15e-4321-932c-37c52be4cf74-kube-api-access-fppzn\") pod \"openstack-operator-controller-operator-5557b664dc-pw4vq\" (UID: \"08b81a71-e15e-4321-932c-37c52be4cf74\") " pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" Dec 04 15:51:28 crc kubenswrapper[4878]: I1204 15:51:28.198737 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fppzn\" (UniqueName: \"kubernetes.io/projected/08b81a71-e15e-4321-932c-37c52be4cf74-kube-api-access-fppzn\") pod \"openstack-operator-controller-operator-5557b664dc-pw4vq\" (UID: \"08b81a71-e15e-4321-932c-37c52be4cf74\") " pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" Dec 04 15:51:28 crc kubenswrapper[4878]: I1204 15:51:28.290832 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" Dec 04 15:51:28 crc kubenswrapper[4878]: I1204 15:51:28.891187 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq"] Dec 04 15:51:28 crc kubenswrapper[4878]: W1204 15:51:28.894237 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08b81a71_e15e_4321_932c_37c52be4cf74.slice/crio-f0714dc966faa1020a1bcf98bda5325caed7bec90047fe73ceea0dffa9059fda WatchSource:0}: Error finding container f0714dc966faa1020a1bcf98bda5325caed7bec90047fe73ceea0dffa9059fda: Status 404 returned error can't find the container with id f0714dc966faa1020a1bcf98bda5325caed7bec90047fe73ceea0dffa9059fda Dec 04 15:51:29 crc kubenswrapper[4878]: I1204 15:51:29.145109 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" event={"ID":"08b81a71-e15e-4321-932c-37c52be4cf74","Type":"ContainerStarted","Data":"f0714dc966faa1020a1bcf98bda5325caed7bec90047fe73ceea0dffa9059fda"} Dec 04 15:51:31 crc kubenswrapper[4878]: I1204 15:51:31.936524 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:31 crc kubenswrapper[4878]: I1204 15:51:31.937072 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:31 crc kubenswrapper[4878]: I1204 15:51:31.976150 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:32 crc kubenswrapper[4878]: I1204 15:51:32.300727 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:34 crc kubenswrapper[4878]: I1204 15:51:34.399355 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xl2h"] Dec 04 15:51:34 crc kubenswrapper[4878]: I1204 15:51:34.399938 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7xl2h" podUID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerName="registry-server" containerID="cri-o://78967977ca007fe8f38c79dfe19309854e7f3bbbecfe134228adb65130cf1185" gracePeriod=2 Dec 04 15:51:35 crc kubenswrapper[4878]: I1204 15:51:35.208022 4878 generic.go:334] "Generic (PLEG): container finished" podID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerID="78967977ca007fe8f38c79dfe19309854e7f3bbbecfe134228adb65130cf1185" exitCode=0 Dec 04 15:51:35 crc kubenswrapper[4878]: I1204 15:51:35.208081 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xl2h" 
event={"ID":"08cee1a7-e31b-411e-b801-5983e4f516ad","Type":"ContainerDied","Data":"78967977ca007fe8f38c79dfe19309854e7f3bbbecfe134228adb65130cf1185"} Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.221503 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.223332 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xl2h" event={"ID":"08cee1a7-e31b-411e-b801-5983e4f516ad","Type":"ContainerDied","Data":"25071fa7000a9ee287f3cc49776548d6c8d333701a2b13896179e54b53423606"} Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.223406 4878 scope.go:117] "RemoveContainer" containerID="78967977ca007fe8f38c79dfe19309854e7f3bbbecfe134228adb65130cf1185" Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.254092 4878 scope.go:117] "RemoveContainer" containerID="caa87c0cf11366ebb93ada120cf91edd35559290c8e3449299d6cbe4cd3392d6" Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.285412 4878 scope.go:117] "RemoveContainer" containerID="3f67ac67f27958c14dbd31c8440748f3627ab392e74761976693de214b15dfa7" Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.370315 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-utilities\") pod \"08cee1a7-e31b-411e-b801-5983e4f516ad\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.370470 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-catalog-content\") pod \"08cee1a7-e31b-411e-b801-5983e4f516ad\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.370522 4878 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6vx\" (UniqueName: \"kubernetes.io/projected/08cee1a7-e31b-411e-b801-5983e4f516ad-kube-api-access-wf6vx\") pod \"08cee1a7-e31b-411e-b801-5983e4f516ad\" (UID: \"08cee1a7-e31b-411e-b801-5983e4f516ad\") " Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.371970 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-utilities" (OuterVolumeSpecName: "utilities") pod "08cee1a7-e31b-411e-b801-5983e4f516ad" (UID: "08cee1a7-e31b-411e-b801-5983e4f516ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.377724 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cee1a7-e31b-411e-b801-5983e4f516ad-kube-api-access-wf6vx" (OuterVolumeSpecName: "kube-api-access-wf6vx") pod "08cee1a7-e31b-411e-b801-5983e4f516ad" (UID: "08cee1a7-e31b-411e-b801-5983e4f516ad"). InnerVolumeSpecName "kube-api-access-wf6vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.398250 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08cee1a7-e31b-411e-b801-5983e4f516ad" (UID: "08cee1a7-e31b-411e-b801-5983e4f516ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.472472 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.472521 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf6vx\" (UniqueName: \"kubernetes.io/projected/08cee1a7-e31b-411e-b801-5983e4f516ad-kube-api-access-wf6vx\") on node \"crc\" DevicePath \"\"" Dec 04 15:51:37 crc kubenswrapper[4878]: I1204 15:51:37.472538 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08cee1a7-e31b-411e-b801-5983e4f516ad-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:51:38 crc kubenswrapper[4878]: I1204 15:51:38.229487 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" event={"ID":"08b81a71-e15e-4321-932c-37c52be4cf74","Type":"ContainerStarted","Data":"47750efd829a7eed3445ad3ab505580f66726d94202f5666156f82a8296b28bb"} Dec 04 15:51:38 crc kubenswrapper[4878]: I1204 15:51:38.229649 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" Dec 04 15:51:38 crc kubenswrapper[4878]: I1204 15:51:38.231087 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xl2h" Dec 04 15:51:38 crc kubenswrapper[4878]: I1204 15:51:38.263986 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" podStartSLOduration=2.906140777 podStartE2EDuration="11.263969015s" podCreationTimestamp="2025-12-04 15:51:27 +0000 UTC" firstStartedPulling="2025-12-04 15:51:28.896970824 +0000 UTC m=+932.859507790" lastFinishedPulling="2025-12-04 15:51:37.254799072 +0000 UTC m=+941.217336028" observedRunningTime="2025-12-04 15:51:38.258171366 +0000 UTC m=+942.220708322" watchObservedRunningTime="2025-12-04 15:51:38.263969015 +0000 UTC m=+942.226505971" Dec 04 15:51:38 crc kubenswrapper[4878]: I1204 15:51:38.274470 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xl2h"] Dec 04 15:51:38 crc kubenswrapper[4878]: I1204 15:51:38.278886 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xl2h"] Dec 04 15:51:39 crc kubenswrapper[4878]: I1204 15:51:39.188286 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cee1a7-e31b-411e-b801-5983e4f516ad" path="/var/lib/kubelet/pods/08cee1a7-e31b-411e-b801-5983e4f516ad/volumes" Dec 04 15:51:48 crc kubenswrapper[4878]: I1204 15:51:48.293288 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5557b664dc-pw4vq" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.263985 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6ctck"] Dec 04 15:52:13 crc kubenswrapper[4878]: E1204 15:52:13.264797 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerName="extract-content" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.264814 4878 
state_mem.go:107] "Deleted CPUSet assignment" podUID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerName="extract-content" Dec 04 15:52:13 crc kubenswrapper[4878]: E1204 15:52:13.264826 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerName="extract-utilities" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.264832 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerName="extract-utilities" Dec 04 15:52:13 crc kubenswrapper[4878]: E1204 15:52:13.264846 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerName="registry-server" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.264853 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerName="registry-server" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.265048 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cee1a7-e31b-411e-b801-5983e4f516ad" containerName="registry-server" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.266024 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.277288 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-catalog-content\") pod \"community-operators-6ctck\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.277354 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-utilities\") pod \"community-operators-6ctck\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.277390 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl7s8\" (UniqueName: \"kubernetes.io/projected/f76d16fb-56b7-4c86-9482-c91990583c80-kube-api-access-fl7s8\") pod \"community-operators-6ctck\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.323082 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ctck"] Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.378775 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-catalog-content\") pod \"community-operators-6ctck\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.379106 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-utilities\") pod \"community-operators-6ctck\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.379204 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl7s8\" (UniqueName: \"kubernetes.io/projected/f76d16fb-56b7-4c86-9482-c91990583c80-kube-api-access-fl7s8\") pod \"community-operators-6ctck\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.379282 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-catalog-content\") pod \"community-operators-6ctck\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.379555 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-utilities\") pod \"community-operators-6ctck\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.402749 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl7s8\" (UniqueName: \"kubernetes.io/projected/f76d16fb-56b7-4c86-9482-c91990583c80-kube-api-access-fl7s8\") pod \"community-operators-6ctck\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:13 crc kubenswrapper[4878]: I1204 15:52:13.582156 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:14 crc kubenswrapper[4878]: I1204 15:52:14.579279 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6ctck"] Dec 04 15:52:15 crc kubenswrapper[4878]: I1204 15:52:15.573821 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctck" event={"ID":"f76d16fb-56b7-4c86-9482-c91990583c80","Type":"ContainerStarted","Data":"ffa9d3a00c8ac8d775eb947c19c9cfac23816394e62f9586cc10e2f1efdc84d1"} Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.580826 4878 generic.go:334] "Generic (PLEG): container finished" podID="f76d16fb-56b7-4c86-9482-c91990583c80" containerID="d238d64eb3ea5a0f01c8b7b4923e3d9608adf16fba4e2f66016b29282a0593ba" exitCode=0 Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.580890 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctck" event={"ID":"f76d16fb-56b7-4c86-9482-c91990583c80","Type":"ContainerDied","Data":"d238d64eb3ea5a0f01c8b7b4923e3d9608adf16fba4e2f66016b29282a0593ba"} Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.645472 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w5snw"] Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.647083 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.694805 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5snw"] Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.734895 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-catalog-content\") pod \"certified-operators-w5snw\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.735174 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-utilities\") pod \"certified-operators-w5snw\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.735307 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nq8r\" (UniqueName: \"kubernetes.io/projected/3aea9c4f-931c-4440-aabc-25d52f240f08-kube-api-access-5nq8r\") pod \"certified-operators-w5snw\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.837104 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-catalog-content\") pod \"certified-operators-w5snw\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.837475 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-utilities\") pod \"certified-operators-w5snw\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.837584 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-catalog-content\") pod \"certified-operators-w5snw\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.837587 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nq8r\" (UniqueName: \"kubernetes.io/projected/3aea9c4f-931c-4440-aabc-25d52f240f08-kube-api-access-5nq8r\") pod \"certified-operators-w5snw\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.838009 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-utilities\") pod \"certified-operators-w5snw\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.856841 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nq8r\" (UniqueName: \"kubernetes.io/projected/3aea9c4f-931c-4440-aabc-25d52f240f08-kube-api-access-5nq8r\") pod \"certified-operators-w5snw\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:16 crc kubenswrapper[4878]: I1204 15:52:16.961834 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.330781 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5snw"] Dec 04 15:52:17 crc kubenswrapper[4878]: W1204 15:52:17.336842 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aea9c4f_931c_4440_aabc_25d52f240f08.slice/crio-de2b07fc239d54dbafd9f2be4166d64eb9d3d75895e34a86954200df42f57ba5 WatchSource:0}: Error finding container de2b07fc239d54dbafd9f2be4166d64eb9d3d75895e34a86954200df42f57ba5: Status 404 returned error can't find the container with id de2b07fc239d54dbafd9f2be4166d64eb9d3d75895e34a86954200df42f57ba5 Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.587581 4878 generic.go:334] "Generic (PLEG): container finished" podID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerID="3f0c5ddfe9fbf412593d715d15cc76915d6633baa84f3dfebb5f70d10efa3901" exitCode=0 Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.587692 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5snw" event={"ID":"3aea9c4f-931c-4440-aabc-25d52f240f08","Type":"ContainerDied","Data":"3f0c5ddfe9fbf412593d715d15cc76915d6633baa84f3dfebb5f70d10efa3901"} Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.587861 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5snw" event={"ID":"3aea9c4f-931c-4440-aabc-25d52f240f08","Type":"ContainerStarted","Data":"de2b07fc239d54dbafd9f2be4166d64eb9d3d75895e34a86954200df42f57ba5"} Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.793975 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6"] Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.795124 4878 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.805172 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-nm2bk" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.827468 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c"] Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.828682 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.830848 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-bzm6k" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.840310 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c"] Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.850752 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78pv\" (UniqueName: \"kubernetes.io/projected/8553cda1-13f9-4f6f-b301-0f757fbf0021-kube-api-access-b78pv\") pod \"barbican-operator-controller-manager-7d9dfd778-tbkt6\" (UID: \"8553cda1-13f9-4f6f-b301-0f757fbf0021\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.850984 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhbs8\" (UniqueName: \"kubernetes.io/projected/a4b2d922-f684-4b6f-93dc-f717d2ece304-kube-api-access-mhbs8\") pod \"cinder-operator-controller-manager-859b6ccc6-d6d6c\" (UID: 
\"a4b2d922-f684-4b6f-93dc-f717d2ece304\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.856397 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f"] Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.857745 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.863665 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-sxlqb" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.911653 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f"] Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.970813 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhbs8\" (UniqueName: \"kubernetes.io/projected/a4b2d922-f684-4b6f-93dc-f717d2ece304-kube-api-access-mhbs8\") pod \"cinder-operator-controller-manager-859b6ccc6-d6d6c\" (UID: \"a4b2d922-f684-4b6f-93dc-f717d2ece304\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.971850 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6"] Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.977235 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b78pv\" (UniqueName: \"kubernetes.io/projected/8553cda1-13f9-4f6f-b301-0f757fbf0021-kube-api-access-b78pv\") pod \"barbican-operator-controller-manager-7d9dfd778-tbkt6\" (UID: \"8553cda1-13f9-4f6f-b301-0f757fbf0021\") " 
pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.977340 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4r4q\" (UniqueName: \"kubernetes.io/projected/69b41a1e-5d38-4364-97bf-af19372d6324-kube-api-access-n4r4q\") pod \"designate-operator-controller-manager-78b4bc895b-tm72f\" (UID: \"69b41a1e-5d38-4364-97bf-af19372d6324\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.983107 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7"] Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.984466 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.989365 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-dg2rq" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.989860 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8"] Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.991231 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.993278 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-6chct" Dec 04 15:52:17 crc kubenswrapper[4878]: I1204 15:52:17.997945 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:17.999225 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.002457 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-tzgnw" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.008198 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhbs8\" (UniqueName: \"kubernetes.io/projected/a4b2d922-f684-4b6f-93dc-f717d2ece304-kube-api-access-mhbs8\") pod \"cinder-operator-controller-manager-859b6ccc6-d6d6c\" (UID: \"a4b2d922-f684-4b6f-93dc-f717d2ece304\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.010973 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.013273 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b78pv\" (UniqueName: \"kubernetes.io/projected/8553cda1-13f9-4f6f-b301-0f757fbf0021-kube-api-access-b78pv\") pod \"barbican-operator-controller-manager-7d9dfd778-tbkt6\" (UID: \"8553cda1-13f9-4f6f-b301-0f757fbf0021\") " 
pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.037725 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.042437 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.062546 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-97pcj"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.068090 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.070740 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.071082 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-9hxrt" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.072962 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-97pcj"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.080115 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx8tb\" (UniqueName: \"kubernetes.io/projected/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-kube-api-access-bx8tb\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 
15:52:18.080204 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bcmt\" (UniqueName: \"kubernetes.io/projected/8b665720-1363-4671-8211-b91712e627df-kube-api-access-2bcmt\") pod \"horizon-operator-controller-manager-68c6d99b8f-spbj8\" (UID: \"8b665720-1363-4671-8211-b91712e627df\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.080258 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4r4q\" (UniqueName: \"kubernetes.io/projected/69b41a1e-5d38-4364-97bf-af19372d6324-kube-api-access-n4r4q\") pod \"designate-operator-controller-manager-78b4bc895b-tm72f\" (UID: \"69b41a1e-5d38-4364-97bf-af19372d6324\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.080307 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dbg\" (UniqueName: \"kubernetes.io/projected/be55b657-228b-4eef-8047-1d4c2577c529-kube-api-access-92dbg\") pod \"heat-operator-controller-manager-5f64f6f8bb-ghx29\" (UID: \"be55b657-228b-4eef-8047-1d4c2577c529\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.080373 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.080399 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shxd5\" 
(UniqueName: \"kubernetes.io/projected/504d742f-8fe2-4006-b94e-bea669f69743-kube-api-access-shxd5\") pod \"glance-operator-controller-manager-77987cd8cd-4bdd7\" (UID: \"504d742f-8fe2-4006-b94e-bea669f69743\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.084939 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.086407 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.090578 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-h9wvm" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.096999 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.111357 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.145290 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4r4q\" (UniqueName: \"kubernetes.io/projected/69b41a1e-5d38-4364-97bf-af19372d6324-kube-api-access-n4r4q\") pod \"designate-operator-controller-manager-78b4bc895b-tm72f\" (UID: \"69b41a1e-5d38-4364-97bf-af19372d6324\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.150012 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.175843 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.177203 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.181066 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-xbvcx" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.182942 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92dbg\" (UniqueName: \"kubernetes.io/projected/be55b657-228b-4eef-8047-1d4c2577c529-kube-api-access-92dbg\") pod \"heat-operator-controller-manager-5f64f6f8bb-ghx29\" (UID: \"be55b657-228b-4eef-8047-1d4c2577c529\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.182987 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.183010 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shxd5\" (UniqueName: \"kubernetes.io/projected/504d742f-8fe2-4006-b94e-bea669f69743-kube-api-access-shxd5\") pod \"glance-operator-controller-manager-77987cd8cd-4bdd7\" (UID: \"504d742f-8fe2-4006-b94e-bea669f69743\") " 
pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.183034 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx8tb\" (UniqueName: \"kubernetes.io/projected/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-kube-api-access-bx8tb\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.183076 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s8jh\" (UniqueName: \"kubernetes.io/projected/1c586b36-c4f0-4de4-8616-ed14769e76a1-kube-api-access-8s8jh\") pod \"ironic-operator-controller-manager-6c548fd776-bh7x6\" (UID: \"1c586b36-c4f0-4de4-8616-ed14769e76a1\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.183124 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bcmt\" (UniqueName: \"kubernetes.io/projected/8b665720-1363-4671-8211-b91712e627df-kube-api-access-2bcmt\") pod \"horizon-operator-controller-manager-68c6d99b8f-spbj8\" (UID: \"8b665720-1363-4671-8211-b91712e627df\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" Dec 04 15:52:18 crc kubenswrapper[4878]: E1204 15:52:18.183553 4878 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:18 crc kubenswrapper[4878]: E1204 15:52:18.183605 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert podName:80bb52cf-c5dd-40ef-b4bf-657d731ad9bc nodeName:}" failed. 
No retries permitted until 2025-12-04 15:52:18.683585194 +0000 UTC m=+982.646122150 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert") pod "infra-operator-controller-manager-57548d458d-97pcj" (UID: "80bb52cf-c5dd-40ef-b4bf-657d731ad9bc") : secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.214034 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.218228 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shxd5\" (UniqueName: \"kubernetes.io/projected/504d742f-8fe2-4006-b94e-bea669f69743-kube-api-access-shxd5\") pod \"glance-operator-controller-manager-77987cd8cd-4bdd7\" (UID: \"504d742f-8fe2-4006-b94e-bea669f69743\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.222850 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bcmt\" (UniqueName: \"kubernetes.io/projected/8b665720-1363-4671-8211-b91712e627df-kube-api-access-2bcmt\") pod \"horizon-operator-controller-manager-68c6d99b8f-spbj8\" (UID: \"8b665720-1363-4671-8211-b91712e627df\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.236533 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92dbg\" (UniqueName: \"kubernetes.io/projected/be55b657-228b-4eef-8047-1d4c2577c529-kube-api-access-92dbg\") pod \"heat-operator-controller-manager-5f64f6f8bb-ghx29\" (UID: \"be55b657-228b-4eef-8047-1d4c2577c529\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 
15:52:18.245759 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.246996 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.259250 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-hmw82" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.278556 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.287012 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqnpq\" (UniqueName: \"kubernetes.io/projected/1880d469-6774-4848-9df9-31bfd93bc699-kube-api-access-dqnpq\") pod \"keystone-operator-controller-manager-7765d96ddf-974fj\" (UID: \"1880d469-6774-4848-9df9-31bfd93bc699\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.287126 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s8jh\" (UniqueName: \"kubernetes.io/projected/1c586b36-c4f0-4de4-8616-ed14769e76a1-kube-api-access-8s8jh\") pod \"ironic-operator-controller-manager-6c548fd776-bh7x6\" (UID: \"1c586b36-c4f0-4de4-8616-ed14769e76a1\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.287191 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknb9\" (UniqueName: 
\"kubernetes.io/projected/95fa2571-c576-4132-b55a-cb1211301ce8-kube-api-access-hknb9\") pod \"manila-operator-controller-manager-7c79b5df47-8wvf6\" (UID: \"95fa2571-c576-4132-b55a-cb1211301ce8\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.294382 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx8tb\" (UniqueName: \"kubernetes.io/projected/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-kube-api-access-bx8tb\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.313596 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.315067 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.327207 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.329301 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.329411 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.337829 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-blhw4" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.339489 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tzsk6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.339947 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.355824 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.363560 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.370096 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s8jh\" (UniqueName: \"kubernetes.io/projected/1c586b36-c4f0-4de4-8616-ed14769e76a1-kube-api-access-8s8jh\") pod \"ironic-operator-controller-manager-6c548fd776-bh7x6\" (UID: \"1c586b36-c4f0-4de4-8616-ed14769e76a1\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.376096 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-hms78"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.377567 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.386465 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-cb4tx" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.388157 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hknb9\" (UniqueName: \"kubernetes.io/projected/95fa2571-c576-4132-b55a-cb1211301ce8-kube-api-access-hknb9\") pod \"manila-operator-controller-manager-7c79b5df47-8wvf6\" (UID: \"95fa2571-c576-4132-b55a-cb1211301ce8\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.388211 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4m67\" (UniqueName: \"kubernetes.io/projected/f925d486-d890-44dc-a416-d976e8b7d188-kube-api-access-d4m67\") pod \"mariadb-operator-controller-manager-56bbcc9d85-nbqqp\" (UID: \"f925d486-d890-44dc-a416-d976e8b7d188\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.388264 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl52w\" (UniqueName: \"kubernetes.io/projected/cd7d361b-7311-4d32-aaae-21ba66a40d69-kube-api-access-rl52w\") pod \"octavia-operator-controller-manager-998648c74-hms78\" (UID: \"cd7d361b-7311-4d32-aaae-21ba66a40d69\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.388291 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqnpq\" (UniqueName: \"kubernetes.io/projected/1880d469-6774-4848-9df9-31bfd93bc699-kube-api-access-dqnpq\") 
pod \"keystone-operator-controller-manager-7765d96ddf-974fj\" (UID: \"1880d469-6774-4848-9df9-31bfd93bc699\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.388344 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppvd5\" (UniqueName: \"kubernetes.io/projected/fb61b1d4-aeeb-4526-8515-4d647d61aa9e-kube-api-access-ppvd5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-cq22h\" (UID: \"fb61b1d4-aeeb-4526-8515-4d647d61aa9e\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.411383 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.424707 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.466824 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hknb9\" (UniqueName: \"kubernetes.io/projected/95fa2571-c576-4132-b55a-cb1211301ce8-kube-api-access-hknb9\") pod \"manila-operator-controller-manager-7c79b5df47-8wvf6\" (UID: \"95fa2571-c576-4132-b55a-cb1211301ce8\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.489635 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.491302 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.492733 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppvd5\" (UniqueName: \"kubernetes.io/projected/fb61b1d4-aeeb-4526-8515-4d647d61aa9e-kube-api-access-ppvd5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-cq22h\" (UID: \"fb61b1d4-aeeb-4526-8515-4d647d61aa9e\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.492778 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4m67\" (UniqueName: \"kubernetes.io/projected/f925d486-d890-44dc-a416-d976e8b7d188-kube-api-access-d4m67\") pod \"mariadb-operator-controller-manager-56bbcc9d85-nbqqp\" (UID: \"f925d486-d890-44dc-a416-d976e8b7d188\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.492825 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl52w\" (UniqueName: \"kubernetes.io/projected/cd7d361b-7311-4d32-aaae-21ba66a40d69-kube-api-access-rl52w\") pod \"octavia-operator-controller-manager-998648c74-hms78\" (UID: \"cd7d361b-7311-4d32-aaae-21ba66a40d69\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.493532 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqnpq\" (UniqueName: \"kubernetes.io/projected/1880d469-6774-4848-9df9-31bfd93bc699-kube-api-access-dqnpq\") pod \"keystone-operator-controller-manager-7765d96ddf-974fj\" (UID: \"1880d469-6774-4848-9df9-31bfd93bc699\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.496990 
4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.506094 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-hms78"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.506614 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-bdzqc" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.507195 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.538321 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl52w\" (UniqueName: \"kubernetes.io/projected/cd7d361b-7311-4d32-aaae-21ba66a40d69-kube-api-access-rl52w\") pod \"octavia-operator-controller-manager-998648c74-hms78\" (UID: \"cd7d361b-7311-4d32-aaae-21ba66a40d69\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.552730 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4m67\" (UniqueName: \"kubernetes.io/projected/f925d486-d890-44dc-a416-d976e8b7d188-kube-api-access-d4m67\") pod \"mariadb-operator-controller-manager-56bbcc9d85-nbqqp\" (UID: \"f925d486-d890-44dc-a416-d976e8b7d188\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.569839 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppvd5\" (UniqueName: \"kubernetes.io/projected/fb61b1d4-aeeb-4526-8515-4d647d61aa9e-kube-api-access-ppvd5\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-cq22h\" (UID: 
\"fb61b1d4-aeeb-4526-8515-4d647d61aa9e\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.570202 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.573217 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.576569 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.576595 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-msctd" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.579458 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.594908 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2fmz\" (UniqueName: \"kubernetes.io/projected/82d2275a-c4c7-42a6-9027-cbbf12d0381f-kube-api-access-w2fmz\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.594961 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.595034 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcq22\" (UniqueName: \"kubernetes.io/projected/e3e80c29-b107-4969-93d7-e305e1c7eaa2-kube-api-access-qcq22\") pod \"nova-operator-controller-manager-697bc559fc-wrss6\" (UID: \"e3e80c29-b107-4969-93d7-e305e1c7eaa2\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.595094 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrn2w\" (UniqueName: \"kubernetes.io/projected/4b7ee068-250c-4674-8ec2-60dd5c0419be-kube-api-access-mrn2w\") pod \"ovn-operator-controller-manager-b6456fdb6-lg8ds\" (UID: \"4b7ee068-250c-4674-8ec2-60dd5c0419be\") " 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.595418 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.596025 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-2vjlc" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.657468 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.678440 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctck" event={"ID":"f76d16fb-56b7-4c86-9482-c91990583c80","Type":"ContainerStarted","Data":"d734005beaedc20fa5521938fe68666b96461e918e8f46a58a8ef39879b5cfa8"} Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.700133 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcq22\" (UniqueName: \"kubernetes.io/projected/e3e80c29-b107-4969-93d7-e305e1c7eaa2-kube-api-access-qcq22\") pod \"nova-operator-controller-manager-697bc559fc-wrss6\" (UID: \"e3e80c29-b107-4969-93d7-e305e1c7eaa2\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.700204 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrn2w\" (UniqueName: \"kubernetes.io/projected/4b7ee068-250c-4674-8ec2-60dd5c0419be-kube-api-access-mrn2w\") pod \"ovn-operator-controller-manager-b6456fdb6-lg8ds\" (UID: \"4b7ee068-250c-4674-8ec2-60dd5c0419be\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 
15:52:18.700266 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.700293 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.700312 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2fmz\" (UniqueName: \"kubernetes.io/projected/82d2275a-c4c7-42a6-9027-cbbf12d0381f-kube-api-access-w2fmz\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:18 crc kubenswrapper[4878]: E1204 15:52:18.700722 4878 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:18 crc kubenswrapper[4878]: E1204 15:52:18.700773 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert podName:80bb52cf-c5dd-40ef-b4bf-657d731ad9bc nodeName:}" failed. No retries permitted until 2025-12-04 15:52:19.700756003 +0000 UTC m=+983.663292959 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert") pod "infra-operator-controller-manager-57548d458d-97pcj" (UID: "80bb52cf-c5dd-40ef-b4bf-657d731ad9bc") : secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:18 crc kubenswrapper[4878]: E1204 15:52:18.715684 4878 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:18 crc kubenswrapper[4878]: E1204 15:52:18.715796 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert podName:82d2275a-c4c7-42a6-9027-cbbf12d0381f nodeName:}" failed. No retries permitted until 2025-12-04 15:52:19.215761404 +0000 UTC m=+983.178298360 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" (UID: "82d2275a-c4c7-42a6-9027-cbbf12d0381f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.717560 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.717999 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.750403 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.765654 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrn2w\" (UniqueName: \"kubernetes.io/projected/4b7ee068-250c-4674-8ec2-60dd5c0419be-kube-api-access-mrn2w\") pod \"ovn-operator-controller-manager-b6456fdb6-lg8ds\" (UID: \"4b7ee068-250c-4674-8ec2-60dd5c0419be\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.766815 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2fmz\" (UniqueName: \"kubernetes.io/projected/82d2275a-c4c7-42a6-9027-cbbf12d0381f-kube-api-access-w2fmz\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.772341 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcq22\" (UniqueName: \"kubernetes.io/projected/e3e80c29-b107-4969-93d7-e305e1c7eaa2-kube-api-access-qcq22\") pod \"nova-operator-controller-manager-697bc559fc-wrss6\" (UID: \"e3e80c29-b107-4969-93d7-e305e1c7eaa2\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.784051 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.787606 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.795025 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.801051 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.806466 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.808252 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ksgm4" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.810802 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.821703 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.838150 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.848442 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-h8pww" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.855257 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.873728 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.904950 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.906431 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.926778 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlpgr\" (UniqueName: \"kubernetes.io/projected/828a8694-88d9-4658-909b-15188336b78b-kube-api-access-xlpgr\") pod \"placement-operator-controller-manager-78f8948974-7cmxk\" (UID: \"828a8694-88d9-4658-909b-15188336b78b\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.937495 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xtrvk" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.937845 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.939078 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" Dec 04 15:52:18 crc kubenswrapper[4878]: E1204 15:52:18.945297 4878 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf76d16fb_56b7_4c86_9482_c91990583c80.slice/crio-conmon-d734005beaedc20fa5521938fe68666b96461e918e8f46a58a8ef39879b5cfa8.scope\": RecentStats: unable to find data in memory cache]" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.948389 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-796l9" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.957761 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.970622 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.984347 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4"] Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.986700 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" Dec 04 15:52:18 crc kubenswrapper[4878]: I1204 15:52:18.990300 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-85qk4" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.003229 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4"] Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.028153 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8tqk\" (UniqueName: \"kubernetes.io/projected/d37a4080-1835-47f1-bad0-040bcb647c80-kube-api-access-f8tqk\") pod \"watcher-operator-controller-manager-769dc69bc-cnxd4\" (UID: \"d37a4080-1835-47f1-bad0-040bcb647c80\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.028258 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzcfw\" (UniqueName: \"kubernetes.io/projected/b89e44e5-1b68-4902-9a89-0b14489e1dfb-kube-api-access-bzcfw\") pod \"telemetry-operator-controller-manager-76cc84c6bb-2vjxr\" (UID: \"b89e44e5-1b68-4902-9a89-0b14489e1dfb\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.028288 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlpgr\" (UniqueName: \"kubernetes.io/projected/828a8694-88d9-4658-909b-15188336b78b-kube-api-access-xlpgr\") pod \"placement-operator-controller-manager-78f8948974-7cmxk\" (UID: \"828a8694-88d9-4658-909b-15188336b78b\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.029262 
4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z692f\" (UniqueName: \"kubernetes.io/projected/a12e358f-da5d-409b-b9d5-a91897588e65-kube-api-access-z692f\") pod \"test-operator-controller-manager-5854674fcc-lmpm5\" (UID: \"a12e358f-da5d-409b-b9d5-a91897588e65\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.029568 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6zcs\" (UniqueName: \"kubernetes.io/projected/9e49df96-9a55-4c5c-864f-cd1aada7db7a-kube-api-access-b6zcs\") pod \"swift-operator-controller-manager-5f8c65bbfc-n8dqh\" (UID: \"9e49df96-9a55-4c5c-864f-cd1aada7db7a\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.057177 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62"] Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.063670 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlpgr\" (UniqueName: \"kubernetes.io/projected/828a8694-88d9-4658-909b-15188336b78b-kube-api-access-xlpgr\") pod \"placement-operator-controller-manager-78f8948974-7cmxk\" (UID: \"828a8694-88d9-4658-909b-15188336b78b\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.064201 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62"] Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.064305 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.069087 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.069330 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.069554 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-8h5kv" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.093939 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.096130 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc"] Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.097168 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.103574 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-jns9q" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.103826 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc"] Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.132391 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6zcs\" (UniqueName: \"kubernetes.io/projected/9e49df96-9a55-4c5c-864f-cd1aada7db7a-kube-api-access-b6zcs\") pod \"swift-operator-controller-manager-5f8c65bbfc-n8dqh\" (UID: \"9e49df96-9a55-4c5c-864f-cd1aada7db7a\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.132730 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.132772 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnn9\" (UniqueName: \"kubernetes.io/projected/c863f265-71e4-4bb2-b872-42d21f42fb5c-kube-api-access-xnnn9\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.132818 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8tqk\" (UniqueName: \"kubernetes.io/projected/d37a4080-1835-47f1-bad0-040bcb647c80-kube-api-access-f8tqk\") pod \"watcher-operator-controller-manager-769dc69bc-cnxd4\" (UID: \"d37a4080-1835-47f1-bad0-040bcb647c80\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.132852 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzcfw\" (UniqueName: \"kubernetes.io/projected/b89e44e5-1b68-4902-9a89-0b14489e1dfb-kube-api-access-bzcfw\") pod \"telemetry-operator-controller-manager-76cc84c6bb-2vjxr\" (UID: \"b89e44e5-1b68-4902-9a89-0b14489e1dfb\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.132923 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfb4k\" (UniqueName: \"kubernetes.io/projected/f4bb7917-09ae-4b2a-95c1-172ff14e5771-kube-api-access-sfb4k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-czgfc\" (UID: \"f4bb7917-09ae-4b2a-95c1-172ff14e5771\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.133000 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z692f\" (UniqueName: \"kubernetes.io/projected/a12e358f-da5d-409b-b9d5-a91897588e65-kube-api-access-z692f\") pod \"test-operator-controller-manager-5854674fcc-lmpm5\" (UID: \"a12e358f-da5d-409b-b9d5-a91897588e65\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.133027 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.158479 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z692f\" (UniqueName: \"kubernetes.io/projected/a12e358f-da5d-409b-b9d5-a91897588e65-kube-api-access-z692f\") pod \"test-operator-controller-manager-5854674fcc-lmpm5\" (UID: \"a12e358f-da5d-409b-b9d5-a91897588e65\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.160492 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6zcs\" (UniqueName: \"kubernetes.io/projected/9e49df96-9a55-4c5c-864f-cd1aada7db7a-kube-api-access-b6zcs\") pod \"swift-operator-controller-manager-5f8c65bbfc-n8dqh\" (UID: \"9e49df96-9a55-4c5c-864f-cd1aada7db7a\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.168606 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzcfw\" (UniqueName: \"kubernetes.io/projected/b89e44e5-1b68-4902-9a89-0b14489e1dfb-kube-api-access-bzcfw\") pod \"telemetry-operator-controller-manager-76cc84c6bb-2vjxr\" (UID: \"b89e44e5-1b68-4902-9a89-0b14489e1dfb\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.180025 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.197542 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8tqk\" (UniqueName: \"kubernetes.io/projected/d37a4080-1835-47f1-bad0-040bcb647c80-kube-api-access-f8tqk\") pod \"watcher-operator-controller-manager-769dc69bc-cnxd4\" (UID: \"d37a4080-1835-47f1-bad0-040bcb647c80\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.233663 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.233746 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfb4k\" (UniqueName: \"kubernetes.io/projected/f4bb7917-09ae-4b2a-95c1-172ff14e5771-kube-api-access-sfb4k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-czgfc\" (UID: \"f4bb7917-09ae-4b2a-95c1-172ff14e5771\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.233776 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.233860 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.233926 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnn9\" (UniqueName: \"kubernetes.io/projected/c863f265-71e4-4bb2-b872-42d21f42fb5c-kube-api-access-xnnn9\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.234835 4878 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.234927 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert podName:82d2275a-c4c7-42a6-9027-cbbf12d0381f nodeName:}" failed. No retries permitted until 2025-12-04 15:52:20.234909053 +0000 UTC m=+984.197446009 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" (UID: "82d2275a-c4c7-42a6-9027-cbbf12d0381f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.236562 4878 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.236605 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:19.736592156 +0000 UTC m=+983.699129112 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "metrics-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.236926 4878 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.237043 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:19.737033277 +0000 UTC m=+983.699570233 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "webhook-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.279942 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.280647 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.287587 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.296231 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.348525 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnn9\" (UniqueName: \"kubernetes.io/projected/c863f265-71e4-4bb2-b872-42d21f42fb5c-kube-api-access-xnnn9\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.357421 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c"] Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.360728 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfb4k\" (UniqueName: \"kubernetes.io/projected/f4bb7917-09ae-4b2a-95c1-172ff14e5771-kube-api-access-sfb4k\") pod \"rabbitmq-cluster-operator-manager-668c99d594-czgfc\" (UID: \"f4bb7917-09ae-4b2a-95c1-172ff14e5771\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.365707 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6"] Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.610212 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.685972 4878 generic.go:334] "Generic (PLEG): container finished" podID="f76d16fb-56b7-4c86-9482-c91990583c80" containerID="d734005beaedc20fa5521938fe68666b96461e918e8f46a58a8ef39879b5cfa8" exitCode=0 Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.686028 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctck" event={"ID":"f76d16fb-56b7-4c86-9482-c91990583c80","Type":"ContainerDied","Data":"d734005beaedc20fa5521938fe68666b96461e918e8f46a58a8ef39879b5cfa8"} Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.687302 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5snw" event={"ID":"3aea9c4f-931c-4440-aabc-25d52f240f08","Type":"ContainerStarted","Data":"110aa4ac7b01c65d488dd08fa185df94d23cf425a12e31334ea967e117ba4dcf"} Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.718735 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" event={"ID":"a4b2d922-f684-4b6f-93dc-f717d2ece304","Type":"ContainerStarted","Data":"c9e99cc2be8294abb4b80a16ea0d09050dfed2e097a15675bcccb5936cc24405"} Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.721792 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" event={"ID":"8553cda1-13f9-4f6f-b301-0f757fbf0021","Type":"ContainerStarted","Data":"d2315f3ffc9e88eb9aa60dd42c6d558834e62fdb50ec6d517fb5fe5ad5bc85b8"} Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.761770 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs\") pod 
\"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.761856 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:19 crc kubenswrapper[4878]: I1204 15:52:19.761907 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.762061 4878 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.762119 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert podName:80bb52cf-c5dd-40ef-b4bf-657d731ad9bc nodeName:}" failed. No retries permitted until 2025-12-04 15:52:21.762100057 +0000 UTC m=+985.724637013 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert") pod "infra-operator-controller-manager-57548d458d-97pcj" (UID: "80bb52cf-c5dd-40ef-b4bf-657d731ad9bc") : secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.763584 4878 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.764164 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:20.764127978 +0000 UTC m=+984.726664954 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "metrics-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.765284 4878 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 15:52:19 crc kubenswrapper[4878]: E1204 15:52:19.765348 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:20.765313298 +0000 UTC m=+984.727850254 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "webhook-server-cert" not found Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.188759 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8"] Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.211266 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f"] Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.294005 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:20 crc kubenswrapper[4878]: E1204 15:52:20.294314 4878 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:20 crc kubenswrapper[4878]: E1204 15:52:20.294384 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert podName:82d2275a-c4c7-42a6-9027-cbbf12d0381f nodeName:}" failed. No retries permitted until 2025-12-04 15:52:22.294361179 +0000 UTC m=+986.256898135 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" (UID: "82d2275a-c4c7-42a6-9027-cbbf12d0381f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.437401 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29"] Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.733757 4878 generic.go:334] "Generic (PLEG): container finished" podID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerID="110aa4ac7b01c65d488dd08fa185df94d23cf425a12e31334ea967e117ba4dcf" exitCode=0 Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.734683 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5snw" event={"ID":"3aea9c4f-931c-4440-aabc-25d52f240f08","Type":"ContainerDied","Data":"110aa4ac7b01c65d488dd08fa185df94d23cf425a12e31334ea967e117ba4dcf"} Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.740499 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" event={"ID":"be55b657-228b-4eef-8047-1d4c2577c529","Type":"ContainerStarted","Data":"36054b4211d4b437c87964bc2deb4dc418c0a568e1f2f4e37ab424b3946b7e54"} Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.747887 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" event={"ID":"69b41a1e-5d38-4364-97bf-af19372d6324","Type":"ContainerStarted","Data":"193ab48f3928315d17a5ae993bd5dc13a51ecb00561af4c536a7707d32789a6f"} Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.749391 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" 
event={"ID":"8b665720-1363-4671-8211-b91712e627df","Type":"ContainerStarted","Data":"c8ddca87634c28a715bba5a91a41d05a81a9e6135b8c40a40a2988253dfc0032"} Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.754813 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctck" event={"ID":"f76d16fb-56b7-4c86-9482-c91990583c80","Type":"ContainerStarted","Data":"b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8"} Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.793543 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6ctck" podStartSLOduration=4.726231952 podStartE2EDuration="7.79351758s" podCreationTimestamp="2025-12-04 15:52:13 +0000 UTC" firstStartedPulling="2025-12-04 15:52:16.58211999 +0000 UTC m=+980.544656936" lastFinishedPulling="2025-12-04 15:52:19.649405608 +0000 UTC m=+983.611942564" observedRunningTime="2025-12-04 15:52:20.781798413 +0000 UTC m=+984.744335369" watchObservedRunningTime="2025-12-04 15:52:20.79351758 +0000 UTC m=+984.756054536" Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.800459 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.800551 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 
04 15:52:20 crc kubenswrapper[4878]: E1204 15:52:20.800775 4878 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 15:52:20 crc kubenswrapper[4878]: E1204 15:52:20.800821 4878 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 15:52:20 crc kubenswrapper[4878]: E1204 15:52:20.800836 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:22.800817215 +0000 UTC m=+986.763354171 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "metrics-server-cert" not found Dec 04 15:52:20 crc kubenswrapper[4878]: E1204 15:52:20.800863 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:22.800851816 +0000 UTC m=+986.763388772 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "webhook-server-cert" not found Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.943748 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7"] Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.954386 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6"] Dec 04 15:52:20 crc kubenswrapper[4878]: W1204 15:52:20.957828 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95fa2571_c576_4132_b55a_cb1211301ce8.slice/crio-0fd459e3214e06ddce3b091e6ce504d8aa9f59c65ec524fee48e1ea7906e12b0 WatchSource:0}: Error finding container 0fd459e3214e06ddce3b091e6ce504d8aa9f59c65ec524fee48e1ea7906e12b0: Status 404 returned error can't find the container with id 0fd459e3214e06ddce3b091e6ce504d8aa9f59c65ec524fee48e1ea7906e12b0 Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.961838 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6"] Dec 04 15:52:20 crc kubenswrapper[4878]: W1204 15:52:20.962472 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b7ee068_250c_4674_8ec2_60dd5c0419be.slice/crio-bac9dbc606248c72b4bf73e7e383f2b94db2cfe232ce84d21b782c315db163a3 WatchSource:0}: Error finding container bac9dbc606248c72b4bf73e7e383f2b94db2cfe232ce84d21b782c315db163a3: Status 404 returned error can't find the container with id bac9dbc606248c72b4bf73e7e383f2b94db2cfe232ce84d21b782c315db163a3 Dec 04 15:52:20 crc 
kubenswrapper[4878]: I1204 15:52:20.974247 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds"] Dec 04 15:52:20 crc kubenswrapper[4878]: W1204 15:52:20.976473 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3e80c29_b107_4969_93d7_e305e1c7eaa2.slice/crio-cd54a6913b0c6a5f1c5e082d932ede69ceb008e10db2f856d753e9ae5ed2c272 WatchSource:0}: Error finding container cd54a6913b0c6a5f1c5e082d932ede69ceb008e10db2f856d753e9ae5ed2c272: Status 404 returned error can't find the container with id cd54a6913b0c6a5f1c5e082d932ede69ceb008e10db2f856d753e9ae5ed2c272 Dec 04 15:52:20 crc kubenswrapper[4878]: I1204 15:52:20.995383 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6"] Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.014935 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh"] Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.022401 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj"] Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.067784 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr"] Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.072027 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp"] Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.080883 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5"] Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.095113 4878 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfb4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-czgfc_openstack-operators(f4bb7917-09ae-4b2a-95c1-172ff14e5771): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.095808 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f8tqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-cnxd4_openstack-operators(d37a4080-1835-47f1-bad0-040bcb647c80): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.096421 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" podUID="f4bb7917-09ae-4b2a-95c1-172ff14e5771" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.097189 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rl52w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-hms78_openstack-operators(cd7d361b-7311-4d32-aaae-21ba66a40d69): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.097338 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xlpgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-7cmxk_openstack-operators(828a8694-88d9-4658-909b-15188336b78b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.097842 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f8tqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-cnxd4_openstack-operators(d37a4080-1835-47f1-bad0-040bcb647c80): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.102017 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppvd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-cq22h_openstack-operators(fb61b1d4-aeeb-4526-8515-4d647d61aa9e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.102222 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" podUID="d37a4080-1835-47f1-bad0-040bcb647c80" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.106239 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xlpgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-7cmxk_openstack-operators(828a8694-88d9-4658-909b-15188336b78b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.106386 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rl52w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-hms78_openstack-operators(cd7d361b-7311-4d32-aaae-21ba66a40d69): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.106573 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppvd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-cq22h_openstack-operators(fb61b1d4-aeeb-4526-8515-4d647d61aa9e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.106703 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc"] Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.107407 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" podUID="828a8694-88d9-4658-909b-15188336b78b" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.107508 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" podUID="cd7d361b-7311-4d32-aaae-21ba66a40d69" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.109209 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" podUID="fb61b1d4-aeeb-4526-8515-4d647d61aa9e" Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.117657 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h"] Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.122382 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4"] Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.126360 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-hms78"] Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.132664 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk"] Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.791839 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" event={"ID":"1880d469-6774-4848-9df9-31bfd93bc699","Type":"ContainerStarted","Data":"ade0691cb6034e06ea56ffe9ddf525e8bc8b42878fabd2f27d439f6d699b74a4"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.795306 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" 
event={"ID":"f4bb7917-09ae-4b2a-95c1-172ff14e5771","Type":"ContainerStarted","Data":"64e877b4fd6e15d7e8093ceff58af6972875181d6be2abd2457514ca617a58ab"} Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.798023 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" podUID="f4bb7917-09ae-4b2a-95c1-172ff14e5771" Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.798433 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" event={"ID":"e3e80c29-b107-4969-93d7-e305e1c7eaa2","Type":"ContainerStarted","Data":"cd54a6913b0c6a5f1c5e082d932ede69ceb008e10db2f856d753e9ae5ed2c272"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.803241 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" event={"ID":"9e49df96-9a55-4c5c-864f-cd1aada7db7a","Type":"ContainerStarted","Data":"ba9329efefb583f8570c3d36ecafc8d27f2d9da8e7ab143c501a1ec5ee3897c2"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.804523 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" event={"ID":"d37a4080-1835-47f1-bad0-040bcb647c80","Type":"ContainerStarted","Data":"12e615b28c3a9f6359a9bfaea0a3cf1967804f915fd1635ba92fed5a19e05154"} Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.807087 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" podUID="d37a4080-1835-47f1-bad0-040bcb647c80" Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.808570 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" event={"ID":"828a8694-88d9-4658-909b-15188336b78b","Type":"ContainerStarted","Data":"e31a6ac0ebda513aff4f3c87381d38321306cb18495c6a56f931a38e6509a11c"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.810569 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" event={"ID":"1c586b36-c4f0-4de4-8616-ed14769e76a1","Type":"ContainerStarted","Data":"9a849f31c577025667f6e0d9f9853fe29e654a52baa827c5d3db3ac6a37d9a23"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.812042 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" event={"ID":"504d742f-8fe2-4006-b94e-bea669f69743","Type":"ContainerStarted","Data":"193af94afbb5c2202e876f568c483298b3102267fcdf31160fc8c754c1d39060"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.814766 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" event={"ID":"a12e358f-da5d-409b-b9d5-a91897588e65","Type":"ContainerStarted","Data":"51ec35118c8a5bf718344561d43a3e3cecd32c2cab6e693a78f293b997256b84"} Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.817179 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" podUID="828a8694-88d9-4658-909b-15188336b78b" Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.819106 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" event={"ID":"fb61b1d4-aeeb-4526-8515-4d647d61aa9e","Type":"ContainerStarted","Data":"72a086f0fe77f641d66a710e3b8d6e3ac36d66643bc4d469557b72d61c56aac7"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.821599 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.822496 4878 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.822554 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert podName:80bb52cf-c5dd-40ef-b4bf-657d731ad9bc nodeName:}" failed. No retries permitted until 2025-12-04 15:52:25.822533133 +0000 UTC m=+989.785070089 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert") pod "infra-operator-controller-manager-57548d458d-97pcj" (UID: "80bb52cf-c5dd-40ef-b4bf-657d731ad9bc") : secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.825758 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" event={"ID":"f925d486-d890-44dc-a416-d976e8b7d188","Type":"ContainerStarted","Data":"ceb4005cf62538ac6b266d48f078c02f9a814a16da24873104070262e2afeaeb"} Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.826142 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" podUID="fb61b1d4-aeeb-4526-8515-4d647d61aa9e" Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.847230 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" event={"ID":"b89e44e5-1b68-4902-9a89-0b14489e1dfb","Type":"ContainerStarted","Data":"26c33ef37068be52dbd3b3639fd4ad8e1553f52639a8066d5176a3094f105f3e"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.870298 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" event={"ID":"95fa2571-c576-4132-b55a-cb1211301ce8","Type":"ContainerStarted","Data":"0fd459e3214e06ddce3b091e6ce504d8aa9f59c65ec524fee48e1ea7906e12b0"} Dec 04 15:52:21 crc 
kubenswrapper[4878]: I1204 15:52:21.881413 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5snw" event={"ID":"3aea9c4f-931c-4440-aabc-25d52f240f08","Type":"ContainerStarted","Data":"eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.887453 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" event={"ID":"cd7d361b-7311-4d32-aaae-21ba66a40d69","Type":"ContainerStarted","Data":"01ca1c7760d0690987fd01f044ccf234c1e1e612fc62c3be08e43aa1424c9c37"} Dec 04 15:52:21 crc kubenswrapper[4878]: E1204 15:52:21.892297 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" podUID="cd7d361b-7311-4d32-aaae-21ba66a40d69" Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.899883 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" event={"ID":"4b7ee068-250c-4674-8ec2-60dd5c0419be","Type":"ContainerStarted","Data":"bac9dbc606248c72b4bf73e7e383f2b94db2cfe232ce84d21b782c315db163a3"} Dec 04 15:52:21 crc kubenswrapper[4878]: I1204 15:52:21.922054 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w5snw" podStartSLOduration=2.214575219 podStartE2EDuration="5.922029637s" podCreationTimestamp="2025-12-04 15:52:16 +0000 UTC" firstStartedPulling="2025-12-04 15:52:17.589462162 
+0000 UTC m=+981.551999108" lastFinishedPulling="2025-12-04 15:52:21.29691657 +0000 UTC m=+985.259453526" observedRunningTime="2025-12-04 15:52:21.90832684 +0000 UTC m=+985.870863796" watchObservedRunningTime="2025-12-04 15:52:21.922029637 +0000 UTC m=+985.884566613" Dec 04 15:52:22 crc kubenswrapper[4878]: I1204 15:52:22.331752 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.331978 4878 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.332081 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert podName:82d2275a-c4c7-42a6-9027-cbbf12d0381f nodeName:}" failed. No retries permitted until 2025-12-04 15:52:26.332062919 +0000 UTC m=+990.294599875 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" (UID: "82d2275a-c4c7-42a6-9027-cbbf12d0381f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:22 crc kubenswrapper[4878]: I1204 15:52:22.838938 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:22 crc kubenswrapper[4878]: I1204 15:52:22.839311 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.839158 4878 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.840620 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:26.840594349 +0000 UTC m=+990.803131305 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "metrics-server-cert" not found Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.840688 4878 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.840715 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:26.840706402 +0000 UTC m=+990.803243358 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "webhook-server-cert" not found Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.920939 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" podUID="f4bb7917-09ae-4b2a-95c1-172ff14e5771" Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.921447 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to 
\"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" podUID="d37a4080-1835-47f1-bad0-040bcb647c80" Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.921611 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" podUID="828a8694-88d9-4658-909b-15188336b78b" Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.921941 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" podUID="cd7d361b-7311-4d32-aaae-21ba66a40d69" Dec 04 15:52:22 crc kubenswrapper[4878]: E1204 15:52:22.923482 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" podUID="fb61b1d4-aeeb-4526-8515-4d647d61aa9e" Dec 04 15:52:23 crc kubenswrapper[4878]: I1204 15:52:23.585388 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:23 crc kubenswrapper[4878]: I1204 15:52:23.585968 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:23 crc kubenswrapper[4878]: I1204 15:52:23.691980 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:25 crc kubenswrapper[4878]: I1204 15:52:25.107179 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:25 crc kubenswrapper[4878]: I1204 15:52:25.830853 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6ctck"] Dec 04 15:52:25 crc kubenswrapper[4878]: I1204 15:52:25.836926 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:25 crc kubenswrapper[4878]: E1204 15:52:25.837076 4878 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:25 crc kubenswrapper[4878]: E1204 15:52:25.837144 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert 
podName:80bb52cf-c5dd-40ef-b4bf-657d731ad9bc nodeName:}" failed. No retries permitted until 2025-12-04 15:52:33.837124402 +0000 UTC m=+997.799661358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert") pod "infra-operator-controller-manager-57548d458d-97pcj" (UID: "80bb52cf-c5dd-40ef-b4bf-657d731ad9bc") : secret "infra-operator-webhook-server-cert" not found Dec 04 15:52:26 crc kubenswrapper[4878]: I1204 15:52:26.344497 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:26 crc kubenswrapper[4878]: E1204 15:52:26.344746 4878 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:26 crc kubenswrapper[4878]: E1204 15:52:26.344865 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert podName:82d2275a-c4c7-42a6-9027-cbbf12d0381f nodeName:}" failed. No retries permitted until 2025-12-04 15:52:34.344832951 +0000 UTC m=+998.307369907 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" (UID: "82d2275a-c4c7-42a6-9027-cbbf12d0381f") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 04 15:52:26 crc kubenswrapper[4878]: I1204 15:52:26.853226 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:26 crc kubenswrapper[4878]: I1204 15:52:26.853396 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:26 crc kubenswrapper[4878]: E1204 15:52:26.853398 4878 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 04 15:52:26 crc kubenswrapper[4878]: E1204 15:52:26.853504 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:34.853470964 +0000 UTC m=+998.816007940 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "metrics-server-cert" not found Dec 04 15:52:26 crc kubenswrapper[4878]: E1204 15:52:26.853611 4878 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 15:52:26 crc kubenswrapper[4878]: E1204 15:52:26.853709 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:34.853682229 +0000 UTC m=+998.816219225 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "webhook-server-cert" not found Dec 04 15:52:26 crc kubenswrapper[4878]: I1204 15:52:26.962027 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:26 crc kubenswrapper[4878]: I1204 15:52:26.962109 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:27 crc kubenswrapper[4878]: I1204 15:52:27.017626 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6ctck" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="registry-server" containerID="cri-o://b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" gracePeriod=2 Dec 04 15:52:27 crc kubenswrapper[4878]: I1204 15:52:27.024993 4878 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:27 crc kubenswrapper[4878]: I1204 15:52:27.099317 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:28 crc kubenswrapper[4878]: I1204 15:52:28.027240 4878 generic.go:334] "Generic (PLEG): container finished" podID="f76d16fb-56b7-4c86-9482-c91990583c80" containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" exitCode=0 Dec 04 15:52:28 crc kubenswrapper[4878]: I1204 15:52:28.027291 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctck" event={"ID":"f76d16fb-56b7-4c86-9482-c91990583c80","Type":"ContainerDied","Data":"b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8"} Dec 04 15:52:28 crc kubenswrapper[4878]: I1204 15:52:28.230399 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5snw"] Dec 04 15:52:29 crc kubenswrapper[4878]: I1204 15:52:29.034154 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w5snw" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="registry-server" containerID="cri-o://eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" gracePeriod=2 Dec 04 15:52:29 crc kubenswrapper[4878]: E1204 15:52:29.098939 4878 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aea9c4f_931c_4440_aabc_25d52f240f08.slice/crio-eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b.scope\": RecentStats: unable to find data in memory cache]" Dec 04 15:52:30 crc kubenswrapper[4878]: I1204 15:52:30.041312 4878 generic.go:334] "Generic (PLEG): container finished" podID="3aea9c4f-931c-4440-aabc-25d52f240f08" 
containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" exitCode=0 Dec 04 15:52:30 crc kubenswrapper[4878]: I1204 15:52:30.041623 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5snw" event={"ID":"3aea9c4f-931c-4440-aabc-25d52f240f08","Type":"ContainerDied","Data":"eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b"} Dec 04 15:52:30 crc kubenswrapper[4878]: I1204 15:52:30.840968 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:52:30 crc kubenswrapper[4878]: I1204 15:52:30.841080 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:52:33 crc kubenswrapper[4878]: E1204 15:52:33.583266 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:33 crc kubenswrapper[4878]: E1204 15:52:33.584412 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" 
containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:33 crc kubenswrapper[4878]: E1204 15:52:33.584950 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:33 crc kubenswrapper[4878]: E1204 15:52:33.585036 4878 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-6ctck" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="registry-server" Dec 04 15:52:33 crc kubenswrapper[4878]: I1204 15:52:33.882063 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:33 crc kubenswrapper[4878]: I1204 15:52:33.887922 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/80bb52cf-c5dd-40ef-b4bf-657d731ad9bc-cert\") pod \"infra-operator-controller-manager-57548d458d-97pcj\" (UID: \"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:34 crc kubenswrapper[4878]: I1204 15:52:34.005956 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:52:34 crc kubenswrapper[4878]: I1204 15:52:34.393213 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:34 crc kubenswrapper[4878]: I1204 15:52:34.400593 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82d2275a-c4c7-42a6-9027-cbbf12d0381f-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz\" (UID: \"82d2275a-c4c7-42a6-9027-cbbf12d0381f\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:34 crc kubenswrapper[4878]: I1204 15:52:34.569712 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:52:34 crc kubenswrapper[4878]: I1204 15:52:34.901229 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:34 crc kubenswrapper[4878]: I1204 15:52:34.901320 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:34 crc kubenswrapper[4878]: E1204 15:52:34.901449 4878 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 04 15:52:34 crc kubenswrapper[4878]: E1204 15:52:34.901539 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs podName:c863f265-71e4-4bb2-b872-42d21f42fb5c nodeName:}" failed. No retries permitted until 2025-12-04 15:52:50.901512928 +0000 UTC m=+1014.864049884 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs") pod "openstack-operator-controller-manager-5f86dd88bc-blw62" (UID: "c863f265-71e4-4bb2-b872-42d21f42fb5c") : secret "webhook-server-cert" not found Dec 04 15:52:34 crc kubenswrapper[4878]: I1204 15:52:34.905448 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-metrics-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:36 crc kubenswrapper[4878]: E1204 15:52:36.508826 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85" Dec 04 15:52:36 crc kubenswrapper[4878]: E1204 15:52:36.509286 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:9f68d7bc8c6bce38f46dee8a8272d5365c49fe7b32b2af52e8ac884e212f3a85,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4r4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-tm72f_openstack-operators(69b41a1e-5d38-4364-97bf-af19372d6324): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:36 crc kubenswrapper[4878]: E1204 15:52:36.963836 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container 
process not found" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:36 crc kubenswrapper[4878]: E1204 15:52:36.964389 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:36 crc kubenswrapper[4878]: E1204 15:52:36.964708 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:36 crc kubenswrapper[4878]: E1204 15:52:36.964785 4878 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-w5snw" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="registry-server" Dec 04 15:52:40 crc kubenswrapper[4878]: E1204 15:52:40.559384 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530" Dec 04 15:52:40 crc kubenswrapper[4878]: E1204 15:52:40.560576 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8s8jh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-bh7x6_openstack-operators(1c586b36-c4f0-4de4-8616-ed14769e76a1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:42 crc kubenswrapper[4878]: E1204 15:52:42.550811 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 04 15:52:42 crc kubenswrapper[4878]: E1204 15:52:42.551518 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dqnpq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-974fj_openstack-operators(1880d469-6774-4848-9df9-31bfd93bc699): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:43 crc kubenswrapper[4878]: E1204 15:52:43.220746 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 04 15:52:43 crc kubenswrapper[4878]: E1204 15:52:43.221022 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z692f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-lmpm5_openstack-operators(a12e358f-da5d-409b-b9d5-a91897588e65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:43 crc kubenswrapper[4878]: E1204 15:52:43.583401 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:43 crc kubenswrapper[4878]: E1204 15:52:43.583721 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:43 crc kubenswrapper[4878]: E1204 15:52:43.583892 4878 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:43 crc kubenswrapper[4878]: E1204 15:52:43.583918 4878 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-6ctck" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="registry-server" Dec 04 15:52:43 crc kubenswrapper[4878]: E1204 15:52:43.981322 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 04 15:52:43 crc kubenswrapper[4878]: E1204 15:52:43.981589 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mrn2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-lg8ds_openstack-operators(4b7ee068-250c-4674-8ec2-60dd5c0419be): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:46 crc kubenswrapper[4878]: E1204 15:52:46.227542 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385" Dec 04 15:52:46 crc kubenswrapper[4878]: E1204 15:52:46.227784 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bzcfw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-2vjxr_openstack-operators(b89e44e5-1b68-4902-9a89-0b14489e1dfb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:46 crc kubenswrapper[4878]: E1204 15:52:46.966126 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:46 crc kubenswrapper[4878]: E1204 15:52:46.966565 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:46 crc kubenswrapper[4878]: E1204 15:52:46.966968 4878 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:46 crc kubenswrapper[4878]: E1204 15:52:46.967011 4878 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-w5snw" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="registry-server" Dec 04 15:52:47 crc kubenswrapper[4878]: E1204 15:52:47.526586 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801" Dec 04 15:52:47 crc kubenswrapper[4878]: E1204 15:52:47.527276 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:1d60701214b39cdb0fa70bbe5710f9b131139a9f4b482c2db4058a04daefb801,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mhbs8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-859b6ccc6-d6d6c_openstack-operators(a4b2d922-f684-4b6f-93dc-f717d2ece304): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:48 crc kubenswrapper[4878]: E1204 15:52:48.175456 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d" Dec 04 15:52:48 crc kubenswrapper[4878]: E1204 15:52:48.175741 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b6zcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-n8dqh_openstack-operators(9e49df96-9a55-4c5c-864f-cd1aada7db7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:48 crc kubenswrapper[4878]: E1204 15:52:48.876733 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809" Dec 04 15:52:48 crc kubenswrapper[4878]: E1204 15:52:48.876995 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:abdb733b01e92ac17f565762f30f1d075b44c16421bd06e557f6bb3c319e1809,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-shxd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-77987cd8cd-4bdd7_openstack-operators(504d742f-8fe2-4006-b94e-bea669f69743): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:50 crc kubenswrapper[4878]: E1204 15:52:50.395979 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 04 15:52:50 crc kubenswrapper[4878]: E1204 15:52:50.396503 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2bcmt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-spbj8_openstack-operators(8b665720-1363-4671-8211-b91712e627df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:50 crc kubenswrapper[4878]: I1204 15:52:50.974350 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:50 crc kubenswrapper[4878]: I1204 15:52:50.982042 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c863f265-71e4-4bb2-b872-42d21f42fb5c-webhook-certs\") pod \"openstack-operator-controller-manager-5f86dd88bc-blw62\" (UID: \"c863f265-71e4-4bb2-b872-42d21f42fb5c\") " pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:51 crc kubenswrapper[4878]: I1204 15:52:51.099941 4878 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:52:51 crc kubenswrapper[4878]: E1204 15:52:51.276827 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 04 15:52:51 crc kubenswrapper[4878]: E1204 15:52:51.277051 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qcq22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-wrss6_openstack-operators(e3e80c29-b107-4969-93d7-e305e1c7eaa2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:51 crc kubenswrapper[4878]: E1204 15:52:51.824684 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 04 15:52:51 crc kubenswrapper[4878]: E1204 15:52:51.825086 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ppvd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-cq22h_openstack-operators(fb61b1d4-aeeb-4526-8515-4d647d61aa9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:51 crc kubenswrapper[4878]: I1204 15:52:51.827606 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 15:52:53 crc kubenswrapper[4878]: E1204 15:52:53.584113 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:53 crc kubenswrapper[4878]: E1204 15:52:53.584972 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" 
containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:53 crc kubenswrapper[4878]: E1204 15:52:53.585338 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:53 crc kubenswrapper[4878]: E1204 15:52:53.585452 4878 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-6ctck" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="registry-server" Dec 04 15:52:54 crc kubenswrapper[4878]: E1204 15:52:54.232920 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 04 15:52:54 crc kubenswrapper[4878]: E1204 15:52:54.233272 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rl52w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-hms78_openstack-operators(cd7d361b-7311-4d32-aaae-21ba66a40d69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:56 crc kubenswrapper[4878]: E1204 15:52:56.962910 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:56 crc kubenswrapper[4878]: E1204 15:52:56.963554 4878 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:56 crc kubenswrapper[4878]: E1204 15:52:56.964183 4878 
log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" cmd=["grpc_health_probe","-addr=:50051"] Dec 04 15:52:56 crc kubenswrapper[4878]: E1204 15:52:56.964243 4878 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-w5snw" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="registry-server" Dec 04 15:52:58 crc kubenswrapper[4878]: E1204 15:52:58.680978 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621" Dec 04 15:52:58 crc kubenswrapper[4878]: E1204 15:52:58.681375 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f8tqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-cnxd4_openstack-operators(d37a4080-1835-47f1-bad0-040bcb647c80): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.754407 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.756102 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.787223 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-utilities\") pod \"f76d16fb-56b7-4c86-9482-c91990583c80\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.787295 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-catalog-content\") pod \"f76d16fb-56b7-4c86-9482-c91990583c80\" (UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.787318 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-utilities\") pod \"3aea9c4f-931c-4440-aabc-25d52f240f08\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.787335 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-catalog-content\") pod \"3aea9c4f-931c-4440-aabc-25d52f240f08\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.787384 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl7s8\" (UniqueName: \"kubernetes.io/projected/f76d16fb-56b7-4c86-9482-c91990583c80-kube-api-access-fl7s8\") pod \"f76d16fb-56b7-4c86-9482-c91990583c80\" 
(UID: \"f76d16fb-56b7-4c86-9482-c91990583c80\") " Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.787431 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nq8r\" (UniqueName: \"kubernetes.io/projected/3aea9c4f-931c-4440-aabc-25d52f240f08-kube-api-access-5nq8r\") pod \"3aea9c4f-931c-4440-aabc-25d52f240f08\" (UID: \"3aea9c4f-931c-4440-aabc-25d52f240f08\") " Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.788593 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-utilities" (OuterVolumeSpecName: "utilities") pod "3aea9c4f-931c-4440-aabc-25d52f240f08" (UID: "3aea9c4f-931c-4440-aabc-25d52f240f08"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.788718 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-utilities" (OuterVolumeSpecName: "utilities") pod "f76d16fb-56b7-4c86-9482-c91990583c80" (UID: "f76d16fb-56b7-4c86-9482-c91990583c80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.798746 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aea9c4f-931c-4440-aabc-25d52f240f08-kube-api-access-5nq8r" (OuterVolumeSpecName: "kube-api-access-5nq8r") pod "3aea9c4f-931c-4440-aabc-25d52f240f08" (UID: "3aea9c4f-931c-4440-aabc-25d52f240f08"). InnerVolumeSpecName "kube-api-access-5nq8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.807419 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76d16fb-56b7-4c86-9482-c91990583c80-kube-api-access-fl7s8" (OuterVolumeSpecName: "kube-api-access-fl7s8") pod "f76d16fb-56b7-4c86-9482-c91990583c80" (UID: "f76d16fb-56b7-4c86-9482-c91990583c80"). InnerVolumeSpecName "kube-api-access-fl7s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.857562 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f76d16fb-56b7-4c86-9482-c91990583c80" (UID: "f76d16fb-56b7-4c86-9482-c91990583c80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.888954 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.889001 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f76d16fb-56b7-4c86-9482-c91990583c80-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.889016 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.889030 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl7s8\" (UniqueName: \"kubernetes.io/projected/f76d16fb-56b7-4c86-9482-c91990583c80-kube-api-access-fl7s8\") 
on node \"crc\" DevicePath \"\"" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.889043 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nq8r\" (UniqueName: \"kubernetes.io/projected/3aea9c4f-931c-4440-aabc-25d52f240f08-kube-api-access-5nq8r\") on node \"crc\" DevicePath \"\"" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.910042 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3aea9c4f-931c-4440-aabc-25d52f240f08" (UID: "3aea9c4f-931c-4440-aabc-25d52f240f08"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:52:58 crc kubenswrapper[4878]: I1204 15:52:58.990508 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3aea9c4f-931c-4440-aabc-25d52f240f08-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.341991 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6ctck" event={"ID":"f76d16fb-56b7-4c86-9482-c91990583c80","Type":"ContainerDied","Data":"ffa9d3a00c8ac8d775eb947c19c9cfac23816394e62f9586cc10e2f1efdc84d1"} Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.342060 4878 scope.go:117] "RemoveContainer" containerID="b64a91c32f6f64c720353d3f31a2bfa12a79613f695697b84bce3477eeed81a8" Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.342237 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6ctck" Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.350410 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5snw" event={"ID":"3aea9c4f-931c-4440-aabc-25d52f240f08","Type":"ContainerDied","Data":"de2b07fc239d54dbafd9f2be4166d64eb9d3d75895e34a86954200df42f57ba5"} Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.350523 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5snw" Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.380183 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6ctck"] Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.385339 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6ctck"] Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.395096 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5snw"] Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.402389 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w5snw"] Dec 04 15:52:59 crc kubenswrapper[4878]: E1204 15:52:59.615772 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 04 15:52:59 crc kubenswrapper[4878]: E1204 15:52:59.616155 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfb4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-czgfc_openstack-operators(f4bb7917-09ae-4b2a-95c1-172ff14e5771): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:52:59 crc kubenswrapper[4878]: E1204 15:52:59.617444 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" podUID="f4bb7917-09ae-4b2a-95c1-172ff14e5771" Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.814602 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-97pcj"] Dec 04 15:52:59 crc kubenswrapper[4878]: I1204 15:52:59.853041 4878 scope.go:117] "RemoveContainer" containerID="d734005beaedc20fa5521938fe68666b96461e918e8f46a58a8ef39879b5cfa8" Dec 04 15:53:00 crc kubenswrapper[4878]: I1204 15:53:00.047936 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62"] Dec 04 15:53:00 crc kubenswrapper[4878]: I1204 15:53:00.080860 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz"] Dec 04 15:53:00 crc kubenswrapper[4878]: W1204 15:53:00.344246 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80bb52cf_c5dd_40ef_b4bf_657d731ad9bc.slice/crio-875fef88c84961298d366905a0d75fa8d6368d3be9fc411a52900c8e55a6209c WatchSource:0}: Error finding container 875fef88c84961298d366905a0d75fa8d6368d3be9fc411a52900c8e55a6209c: Status 404 returned error can't find the container with id 875fef88c84961298d366905a0d75fa8d6368d3be9fc411a52900c8e55a6209c Dec 04 15:53:00 crc kubenswrapper[4878]: I1204 15:53:00.363634 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" event={"ID":"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc","Type":"ContainerStarted","Data":"875fef88c84961298d366905a0d75fa8d6368d3be9fc411a52900c8e55a6209c"} Dec 04 15:53:00 crc kubenswrapper[4878]: I1204 15:53:00.395827 4878 scope.go:117] "RemoveContainer" containerID="d238d64eb3ea5a0f01c8b7b4923e3d9608adf16fba4e2f66016b29282a0593ba" Dec 04 15:53:00 crc kubenswrapper[4878]: I1204 15:53:00.670526 4878 scope.go:117] "RemoveContainer" containerID="eff3cc896ec6e9e627c5bff6d3f43f843114adc845aab24139aa15a36c49293b" Dec 04 15:53:00 crc kubenswrapper[4878]: I1204 15:53:00.840320 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:53:00 crc kubenswrapper[4878]: I1204 15:53:00.840392 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:53:00 crc kubenswrapper[4878]: I1204 15:53:00.899134 4878 scope.go:117] "RemoveContainer" containerID="110aa4ac7b01c65d488dd08fa185df94d23cf425a12e31334ea967e117ba4dcf" Dec 04 15:53:01 crc kubenswrapper[4878]: I1204 15:53:01.145240 4878 scope.go:117] "RemoveContainer" containerID="3f0c5ddfe9fbf412593d715d15cc76915d6633baa84f3dfebb5f70d10efa3901" Dec 04 15:53:01 crc kubenswrapper[4878]: I1204 15:53:01.197139 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" path="/var/lib/kubelet/pods/3aea9c4f-931c-4440-aabc-25d52f240f08/volumes" Dec 04 15:53:01 crc kubenswrapper[4878]: I1204 
15:53:01.197994 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" path="/var/lib/kubelet/pods/f76d16fb-56b7-4c86-9482-c91990583c80/volumes" Dec 04 15:53:01 crc kubenswrapper[4878]: I1204 15:53:01.381147 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" event={"ID":"82d2275a-c4c7-42a6-9027-cbbf12d0381f","Type":"ContainerStarted","Data":"e53ee24c08cd4f58725b779e665ba859e37b5128ba24761a2984db8b2b90d80d"} Dec 04 15:53:01 crc kubenswrapper[4878]: I1204 15:53:01.400266 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" event={"ID":"95fa2571-c576-4132-b55a-cb1211301ce8","Type":"ContainerStarted","Data":"f96997ab879ab5afacbb90c0ea68a2ed14562824b785cf854b8a7ab459ab8f19"} Dec 04 15:53:01 crc kubenswrapper[4878]: I1204 15:53:01.405266 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" event={"ID":"828a8694-88d9-4658-909b-15188336b78b","Type":"ContainerStarted","Data":"d9ea4e4ae6894a31745e3b2c46646e0e0f83588902aefcc4f0d9143ac6083c65"} Dec 04 15:53:01 crc kubenswrapper[4878]: I1204 15:53:01.406608 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" event={"ID":"8553cda1-13f9-4f6f-b301-0f757fbf0021","Type":"ContainerStarted","Data":"037a8cefc5d452af28aefd27db2e61db1dd0b5f3c3cbfa21ebe44c1a52a851df"} Dec 04 15:53:01 crc kubenswrapper[4878]: I1204 15:53:01.408646 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" event={"ID":"be55b657-228b-4eef-8047-1d4c2577c529","Type":"ContainerStarted","Data":"0e80fa883d92034d326ebc6b4baf812ecb60ab182f94dd2caa8aefbbdb9d297e"} Dec 04 15:53:01 crc kubenswrapper[4878]: 
I1204 15:53:01.411650 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" event={"ID":"f925d486-d890-44dc-a416-d976e8b7d188","Type":"ContainerStarted","Data":"0586897d6899ff8c0d9628addab3a9abaf2a81289c6fd2dd69dd248e0c841a2c"} Dec 04 15:53:01 crc kubenswrapper[4878]: I1204 15:53:01.413375 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" event={"ID":"c863f265-71e4-4bb2-b872-42d21f42fb5c","Type":"ContainerStarted","Data":"cf9c5d9dd10730f074860450a7cbf37b0e5b582545fbb8394b23a0653469bf10"} Dec 04 15:53:04 crc kubenswrapper[4878]: I1204 15:53:04.517023 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" event={"ID":"c863f265-71e4-4bb2-b872-42d21f42fb5c","Type":"ContainerStarted","Data":"d93d90217bf1505155b01511fdc208cd3058757e2e3d5e6e1879d7a6012f6bb3"} Dec 04 15:53:04 crc kubenswrapper[4878]: I1204 15:53:04.517544 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:53:04 crc kubenswrapper[4878]: I1204 15:53:04.550463 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" podStartSLOduration=46.550422234 podStartE2EDuration="46.550422234s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:53:04.54156117 +0000 UTC m=+1028.504098136" watchObservedRunningTime="2025-12-04 15:53:04.550422234 +0000 UTC m=+1028.512959190" Dec 04 15:53:05 crc kubenswrapper[4878]: E1204 15:53:05.767497 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 04 15:53:05 crc kubenswrapper[4878]: E1204 15:53:05.767757 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n4r4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-78b4bc895b-tm72f_openstack-operators(69b41a1e-5d38-4364-97bf-af19372d6324): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:53:05 crc kubenswrapper[4878]: E1204 15:53:05.769008 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" podUID="69b41a1e-5d38-4364-97bf-af19372d6324" Dec 04 15:53:07 crc kubenswrapper[4878]: E1204 15:53:07.433081 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" podUID="a4b2d922-f684-4b6f-93dc-f717d2ece304" Dec 04 15:53:07 crc kubenswrapper[4878]: E1204 15:53:07.494357 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" podUID="504d742f-8fe2-4006-b94e-bea669f69743" Dec 04 15:53:07 crc kubenswrapper[4878]: E1204 15:53:07.545960 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" podUID="9e49df96-9a55-4c5c-864f-cd1aada7db7a" Dec 04 15:53:07 crc kubenswrapper[4878]: I1204 15:53:07.555575 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" event={"ID":"504d742f-8fe2-4006-b94e-bea669f69743","Type":"ContainerStarted","Data":"669ab1c054d51472980fe2659d1ec97167b69c3a23cd2fb8dd52261b7806f5a2"} Dec 04 15:53:07 crc kubenswrapper[4878]: E1204 15:53:07.658355 4878 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" podUID="cd7d361b-7311-4d32-aaae-21ba66a40d69" Dec 04 15:53:07 crc kubenswrapper[4878]: I1204 15:53:07.676931 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" event={"ID":"a4b2d922-f684-4b6f-93dc-f717d2ece304","Type":"ContainerStarted","Data":"ecbb3fa72c84ca9fdab6ca5f0f61280ac0a7757d8ac97ebfe9616ffb0ef6f4b5"} Dec 04 15:53:07 crc kubenswrapper[4878]: I1204 15:53:07.690562 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" event={"ID":"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc","Type":"ContainerStarted","Data":"55fc1eab082447869ed2d771bf02b48a346892de2dce3f8b153e95e6dc92e214"} Dec 04 15:53:07 crc kubenswrapper[4878]: E1204 15:53:07.833420 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" podUID="1c586b36-c4f0-4de4-8616-ed14769e76a1" Dec 04 15:53:07 crc kubenswrapper[4878]: E1204 15:53:07.951077 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" podUID="a12e358f-da5d-409b-b9d5-a91897588e65" Dec 04 15:53:07 crc kubenswrapper[4878]: E1204 15:53:07.982273 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" podUID="fb61b1d4-aeeb-4526-8515-4d647d61aa9e" Dec 04 15:53:08 crc kubenswrapper[4878]: E1204 15:53:08.157152 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" podUID="8b665720-1363-4671-8211-b91712e627df" Dec 04 15:53:08 crc kubenswrapper[4878]: E1204 15:53:08.443476 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" podUID="e3e80c29-b107-4969-93d7-e305e1c7eaa2" Dec 04 15:53:08 crc kubenswrapper[4878]: E1204 15:53:08.518702 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" podUID="1880d469-6774-4848-9df9-31bfd93bc699" Dec 04 15:53:08 crc kubenswrapper[4878]: E1204 15:53:08.719274 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" podUID="b89e44e5-1b68-4902-9a89-0b14489e1dfb" Dec 04 15:53:08 crc kubenswrapper[4878]: E1204 15:53:08.749282 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" 
podUID="4b7ee068-250c-4674-8ec2-60dd5c0419be" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.753707 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" event={"ID":"828a8694-88d9-4658-909b-15188336b78b","Type":"ContainerStarted","Data":"ea8d040a796d59f1cc498ef423920903408174fbe9398af65f95d906be9b37b5"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.754893 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.756090 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" event={"ID":"e3e80c29-b107-4969-93d7-e305e1c7eaa2","Type":"ContainerStarted","Data":"f591c3525b41b1066bd678d9d4790bca2eca1225bbcb86260b0d2ef1df2928f5"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.759706 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.767231 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" event={"ID":"be55b657-228b-4eef-8047-1d4c2577c529","Type":"ContainerStarted","Data":"527dd53cecc3c4bfac52c9ce4cf0f0b71760bccff16b868ad11a823a2c5e5c08"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.768136 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.770129 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" 
event={"ID":"fb61b1d4-aeeb-4526-8515-4d647d61aa9e","Type":"ContainerStarted","Data":"a683452d0cb26ac043c4e1863bb7b31e4f2f2a02984eec45bacd598ff127f61d"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.770851 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" Dec 04 15:53:08 crc kubenswrapper[4878]: E1204 15:53:08.773929 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" podUID="fb61b1d4-aeeb-4526-8515-4d647d61aa9e" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.777765 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" event={"ID":"82d2275a-c4c7-42a6-9027-cbbf12d0381f","Type":"ContainerStarted","Data":"e1586f2764dcbf9d24aac6f57bdd41fe11e66621d4c2841b385550eea8ed2e7b"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.777807 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" event={"ID":"82d2275a-c4c7-42a6-9027-cbbf12d0381f","Type":"ContainerStarted","Data":"d2c28c4f74a1cc3ceb071669e8af6a2356aef9db979e6e8d1d037dc163078899"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.778585 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.782043 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" 
event={"ID":"69b41a1e-5d38-4364-97bf-af19372d6324","Type":"ContainerStarted","Data":"5e6159c282dd853fc8e96e1b06f1613be9c9d4095cb5759822d37fa6f2ad7ac8"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.783686 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" event={"ID":"a12e358f-da5d-409b-b9d5-a91897588e65","Type":"ContainerStarted","Data":"1efab06e97ee358ab7f7c8f22aa47e366cb201c9cfd3d24fc8ea74992788649a"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.791958 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" event={"ID":"b89e44e5-1b68-4902-9a89-0b14489e1dfb","Type":"ContainerStarted","Data":"28bd133412f4baa4916af601c7dfa388e8628175f2955bf876c9f7f11643ab5e"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.800218 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" event={"ID":"95fa2571-c576-4132-b55a-cb1211301ce8","Type":"ContainerStarted","Data":"329b1fde0fa19abea878a175c35fdca178b77bb94be5459d9f7c7d3b85168eec"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.801104 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.811008 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" Dec 04 15:53:08 crc kubenswrapper[4878]: E1204 15:53:08.812949 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" 
podUID="d37a4080-1835-47f1-bad0-040bcb647c80" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.813084 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" event={"ID":"9e49df96-9a55-4c5c-864f-cd1aada7db7a","Type":"ContainerStarted","Data":"7e82990b3e1d46a331d36b2337d14c71185fbc1702368f3851e3c9a7f1b008d3"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.814583 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" event={"ID":"8553cda1-13f9-4f6f-b301-0f757fbf0021","Type":"ContainerStarted","Data":"fdf5169f28dae6da35abe84050cf0326a4d73b989e98e570cb2f09dad4d31bf7"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.815513 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.826793 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-7cmxk" podStartSLOduration=4.842231969 podStartE2EDuration="50.826771813s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.097264855 +0000 UTC m=+985.059801811" lastFinishedPulling="2025-12-04 15:53:07.081804699 +0000 UTC m=+1031.044341655" observedRunningTime="2025-12-04 15:53:08.823715705 +0000 UTC m=+1032.786252661" watchObservedRunningTime="2025-12-04 15:53:08.826771813 +0000 UTC m=+1032.789308769" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.830222 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.860536 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" event={"ID":"8b665720-1363-4671-8211-b91712e627df","Type":"ContainerStarted","Data":"0d12c481a76623b6c8c288a83de3541a8c216e3c080bd5718403403dda9ae74b"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.879184 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" event={"ID":"1c586b36-c4f0-4de4-8616-ed14769e76a1","Type":"ContainerStarted","Data":"41ceee0cdd58ec433f93bd24d1a8f4d91163d8fe63868a848ad5d35e14f4b2bb"} Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.906406 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" event={"ID":"cd7d361b-7311-4d32-aaae-21ba66a40d69","Type":"ContainerStarted","Data":"1f0c29b233156310c09745899a71441b725491b0d0872a2225a7f28e442a0cf7"} Dec 04 15:53:08 crc kubenswrapper[4878]: E1204 15:53:08.917257 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" podUID="cd7d361b-7311-4d32-aaae-21ba66a40d69" Dec 04 15:53:08 crc kubenswrapper[4878]: I1204 15:53:08.932510 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" event={"ID":"1880d469-6774-4848-9df9-31bfd93bc699","Type":"ContainerStarted","Data":"0df6752ac774dcedcca103d4204977b392c705b7db94fcb7ed48c660b89a3c96"} Dec 04 15:53:09 crc kubenswrapper[4878]: I1204 15:53:08.964852 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-8wvf6" podStartSLOduration=4.760449525 
podStartE2EDuration="50.964830004s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:20.964280752 +0000 UTC m=+984.926817698" lastFinishedPulling="2025-12-04 15:53:07.168661221 +0000 UTC m=+1031.131198177" observedRunningTime="2025-12-04 15:53:08.919266529 +0000 UTC m=+1032.881803485" watchObservedRunningTime="2025-12-04 15:53:08.964830004 +0000 UTC m=+1032.927366960" Dec 04 15:53:09 crc kubenswrapper[4878]: I1204 15:53:08.984264 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" event={"ID":"f925d486-d890-44dc-a416-d976e8b7d188","Type":"ContainerStarted","Data":"15d1575b842ec61351b4a7475a7f126755a51383f457b3b53ecb525577f93a4a"} Dec 04 15:53:09 crc kubenswrapper[4878]: I1204 15:53:08.985255 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" Dec 04 15:53:09 crc kubenswrapper[4878]: I1204 15:53:09.068855 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" Dec 04 15:53:09 crc kubenswrapper[4878]: I1204 15:53:09.336766 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" podStartSLOduration=44.848154151 podStartE2EDuration="51.336748969s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:53:00.37282188 +0000 UTC m=+1024.335358836" lastFinishedPulling="2025-12-04 15:53:06.861416698 +0000 UTC m=+1030.823953654" observedRunningTime="2025-12-04 15:53:09.334221055 +0000 UTC m=+1033.296758011" watchObservedRunningTime="2025-12-04 15:53:09.336748969 +0000 UTC m=+1033.299285925" Dec 04 15:53:09 crc kubenswrapper[4878]: I1204 15:53:09.407722 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-tbkt6" podStartSLOduration=4.893589346 podStartE2EDuration="52.407700739s" podCreationTimestamp="2025-12-04 15:52:17 +0000 UTC" firstStartedPulling="2025-12-04 15:52:19.563063598 +0000 UTC m=+983.525600554" lastFinishedPulling="2025-12-04 15:53:07.077174991 +0000 UTC m=+1031.039711947" observedRunningTime="2025-12-04 15:53:09.378718034 +0000 UTC m=+1033.341254980" watchObservedRunningTime="2025-12-04 15:53:09.407700739 +0000 UTC m=+1033.370237695" Dec 04 15:53:09 crc kubenswrapper[4878]: I1204 15:53:09.408240 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-ghx29" podStartSLOduration=5.772938062 podStartE2EDuration="52.408236013s" podCreationTimestamp="2025-12-04 15:52:17 +0000 UTC" firstStartedPulling="2025-12-04 15:52:20.440480655 +0000 UTC m=+984.403017611" lastFinishedPulling="2025-12-04 15:53:07.075778606 +0000 UTC m=+1031.038315562" observedRunningTime="2025-12-04 15:53:09.404319373 +0000 UTC m=+1033.366856329" watchObservedRunningTime="2025-12-04 15:53:09.408236013 +0000 UTC m=+1033.370772959" Dec 04 15:53:09 crc kubenswrapper[4878]: I1204 15:53:09.610175 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-nbqqp" podStartSLOduration=5.6245722350000005 podStartE2EDuration="51.610155925s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.09112891 +0000 UTC m=+985.053665866" lastFinishedPulling="2025-12-04 15:53:07.07671261 +0000 UTC m=+1031.039249556" observedRunningTime="2025-12-04 15:53:09.60406237 +0000 UTC m=+1033.566599326" watchObservedRunningTime="2025-12-04 15:53:09.610155925 +0000 UTC m=+1033.572692881" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.083162 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" event={"ID":"504d742f-8fe2-4006-b94e-bea669f69743","Type":"ContainerStarted","Data":"a4a87440fa8b3dc180790082bcf61de4e0553f3faa9e76cfb7c22a8c3a0cecaf"} Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.084915 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.124795 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" event={"ID":"a12e358f-da5d-409b-b9d5-a91897588e65","Type":"ContainerStarted","Data":"7067aa7dbd14ae9c8f48dc91cdd59902eda685dc12f584f8c7e97bd7f9c5a413"} Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.126313 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.129270 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" event={"ID":"a4b2d922-f684-4b6f-93dc-f717d2ece304","Type":"ContainerStarted","Data":"e9ed9d342ddf3ef7f0732a6dde22c1b3241b1af9c93a788be30113fa34efd13e"} Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.130234 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.139069 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" event={"ID":"80bb52cf-c5dd-40ef-b4bf-657d731ad9bc","Type":"ContainerStarted","Data":"1c7e75c719be963199758b0a6cced3613846abba9fdb2e1b3f7ca66a8ee87427"} Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.139349 4878 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.141628 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" event={"ID":"d37a4080-1835-47f1-bad0-040bcb647c80","Type":"ContainerStarted","Data":"a071fbe66c18cc029fac19d9b8464b134da4ffbc37890bc5e299bbd51e179d8a"} Dec 04 15:53:10 crc kubenswrapper[4878]: E1204 15:53:10.143737 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" podUID="d37a4080-1835-47f1-bad0-040bcb647c80" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.145390 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" event={"ID":"9e49df96-9a55-4c5c-864f-cd1aada7db7a","Type":"ContainerStarted","Data":"4f086c11a0362183f18a5a372904934660775495706adefc0433c3192d5f8484"} Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.146180 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.167370 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" event={"ID":"69b41a1e-5d38-4364-97bf-af19372d6324","Type":"ContainerStarted","Data":"5d6bced616109624fb4c19538e42636acdae99541eabb860a198a881736bc28f"} Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.168997 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.188036 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" event={"ID":"4b7ee068-250c-4674-8ec2-60dd5c0419be","Type":"ContainerStarted","Data":"ace3c6a1a42b7dacf25f70b119b7af0c4e561150c05ba3918587d05fe6948336"} Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.228102 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" podStartSLOduration=5.703365837 podStartE2EDuration="53.22807996s" podCreationTimestamp="2025-12-04 15:52:17 +0000 UTC" firstStartedPulling="2025-12-04 15:52:20.962093176 +0000 UTC m=+984.924630132" lastFinishedPulling="2025-12-04 15:53:08.486807299 +0000 UTC m=+1032.449344255" observedRunningTime="2025-12-04 15:53:10.219096782 +0000 UTC m=+1034.181633738" watchObservedRunningTime="2025-12-04 15:53:10.22807996 +0000 UTC m=+1034.190616906" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.352894 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" podStartSLOduration=6.459991822 podStartE2EDuration="53.352854225s" podCreationTimestamp="2025-12-04 15:52:17 +0000 UTC" firstStartedPulling="2025-12-04 15:52:20.242189795 +0000 UTC m=+984.204726751" lastFinishedPulling="2025-12-04 15:53:07.135052198 +0000 UTC m=+1031.097589154" observedRunningTime="2025-12-04 15:53:10.269549092 +0000 UTC m=+1034.232086048" watchObservedRunningTime="2025-12-04 15:53:10.352854225 +0000 UTC m=+1034.315391181" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.553060 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" podStartSLOduration=47.154404968 
podStartE2EDuration="53.553040013s" podCreationTimestamp="2025-12-04 15:52:17 +0000 UTC" firstStartedPulling="2025-12-04 15:53:00.346812121 +0000 UTC m=+1024.309349077" lastFinishedPulling="2025-12-04 15:53:06.745447166 +0000 UTC m=+1030.707984122" observedRunningTime="2025-12-04 15:53:10.547068182 +0000 UTC m=+1034.509605158" watchObservedRunningTime="2025-12-04 15:53:10.553040013 +0000 UTC m=+1034.515576969" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.591482 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" podStartSLOduration=4.871472346 podStartE2EDuration="53.591460098s" podCreationTimestamp="2025-12-04 15:52:17 +0000 UTC" firstStartedPulling="2025-12-04 15:52:19.563465168 +0000 UTC m=+983.526002124" lastFinishedPulling="2025-12-04 15:53:08.28345291 +0000 UTC m=+1032.245989876" observedRunningTime="2025-12-04 15:53:10.575006281 +0000 UTC m=+1034.537543247" watchObservedRunningTime="2025-12-04 15:53:10.591460098 +0000 UTC m=+1034.553997054" Dec 04 15:53:10 crc kubenswrapper[4878]: I1204 15:53:10.623022 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" podStartSLOduration=4.312139954 podStartE2EDuration="52.622996368s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.091845168 +0000 UTC m=+985.054382134" lastFinishedPulling="2025-12-04 15:53:09.402701592 +0000 UTC m=+1033.365238548" observedRunningTime="2025-12-04 15:53:10.611803274 +0000 UTC m=+1034.574340230" watchObservedRunningTime="2025-12-04 15:53:10.622996368 +0000 UTC m=+1034.585533334" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.107631 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5f86dd88bc-blw62" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 
15:53:11.143337 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" podStartSLOduration=5.577903191 podStartE2EDuration="53.143317606s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.091595221 +0000 UTC m=+985.054132177" lastFinishedPulling="2025-12-04 15:53:08.657009646 +0000 UTC m=+1032.619546592" observedRunningTime="2025-12-04 15:53:10.652185348 +0000 UTC m=+1034.614722304" watchObservedRunningTime="2025-12-04 15:53:11.143317606 +0000 UTC m=+1035.105854562" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.243710 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" event={"ID":"8b665720-1363-4671-8211-b91712e627df","Type":"ContainerStarted","Data":"a3a53aa5744cbaed2dd5b48e60fdd542c64df320dc601d406fc0960e78220c07"} Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.243950 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.250986 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" event={"ID":"1c586b36-c4f0-4de4-8616-ed14769e76a1","Type":"ContainerStarted","Data":"0cb2b810c28dc22284a3a206ebc79b2ac0b405d7897034975d844f8cff1f61c7"} Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.251525 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.256216 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" 
event={"ID":"1880d469-6774-4848-9df9-31bfd93bc699","Type":"ContainerStarted","Data":"1862cf3eba33951495603e187a296a9224bfc3981fd308acb7996dbe6521c36f"} Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.257042 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.258733 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" event={"ID":"b89e44e5-1b68-4902-9a89-0b14489e1dfb","Type":"ContainerStarted","Data":"757d86896557ec46335026fc27a0ffd176daeae912fbba81f84d2ad5e46ca1a6"} Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.259268 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.301944 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" event={"ID":"e3e80c29-b107-4969-93d7-e305e1c7eaa2","Type":"ContainerStarted","Data":"3fe69b9a36bee8e99781409cc13508c521db2d333479d5bf1e31c27c629f3068"} Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.301991 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.306853 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" podStartSLOduration=4.015764048 podStartE2EDuration="54.306830784s" podCreationTimestamp="2025-12-04 15:52:17 +0000 UTC" firstStartedPulling="2025-12-04 15:52:20.241913298 +0000 UTC m=+984.204450254" lastFinishedPulling="2025-12-04 15:53:10.532980034 +0000 UTC m=+1034.495516990" 
observedRunningTime="2025-12-04 15:53:11.270125413 +0000 UTC m=+1035.232662379" watchObservedRunningTime="2025-12-04 15:53:11.306830784 +0000 UTC m=+1035.269367740" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.427553 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" podStartSLOduration=4.994211395 podStartE2EDuration="53.427532006s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.094801303 +0000 UTC m=+985.057338259" lastFinishedPulling="2025-12-04 15:53:09.528121914 +0000 UTC m=+1033.490658870" observedRunningTime="2025-12-04 15:53:11.425038053 +0000 UTC m=+1035.387575019" watchObservedRunningTime="2025-12-04 15:53:11.427532006 +0000 UTC m=+1035.390068962" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.429803 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" podStartSLOduration=5.819275447 podStartE2EDuration="54.429790283s" podCreationTimestamp="2025-12-04 15:52:17 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.002164233 +0000 UTC m=+984.964701189" lastFinishedPulling="2025-12-04 15:53:09.612679069 +0000 UTC m=+1033.575216025" observedRunningTime="2025-12-04 15:53:11.305532531 +0000 UTC m=+1035.268069487" watchObservedRunningTime="2025-12-04 15:53:11.429790283 +0000 UTC m=+1035.392327239" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.459663 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" podStartSLOduration=5.66564757 podStartE2EDuration="54.459635541s" podCreationTimestamp="2025-12-04 15:52:17 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.092092224 +0000 UTC m=+985.054629180" lastFinishedPulling="2025-12-04 15:53:09.886080195 +0000 UTC m=+1033.848617151" 
observedRunningTime="2025-12-04 15:53:11.454915391 +0000 UTC m=+1035.417452377" watchObservedRunningTime="2025-12-04 15:53:11.459635541 +0000 UTC m=+1035.422172507" Dec 04 15:53:11 crc kubenswrapper[4878]: I1204 15:53:11.481629 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" podStartSLOduration=5.074908042 podStartE2EDuration="53.481606038s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:20.992621051 +0000 UTC m=+984.955157997" lastFinishedPulling="2025-12-04 15:53:09.399319037 +0000 UTC m=+1033.361855993" observedRunningTime="2025-12-04 15:53:11.477055422 +0000 UTC m=+1035.439592378" watchObservedRunningTime="2025-12-04 15:53:11.481606038 +0000 UTC m=+1035.444142994" Dec 04 15:53:12 crc kubenswrapper[4878]: E1204 15:53:12.236726 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" podUID="f4bb7917-09ae-4b2a-95c1-172ff14e5771" Dec 04 15:53:12 crc kubenswrapper[4878]: I1204 15:53:12.310655 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" event={"ID":"4b7ee068-250c-4674-8ec2-60dd5c0419be","Type":"ContainerStarted","Data":"54bf1f38289f1ac8e8dbbf773949d19ccdce7a1a8ba480e8680642d1c28d856e"} Dec 04 15:53:12 crc kubenswrapper[4878]: I1204 15:53:12.314091 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" Dec 04 15:53:12 crc kubenswrapper[4878]: I1204 15:53:12.327362 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" podStartSLOduration=4.266936717 podStartE2EDuration="54.327345092s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:20.969759261 +0000 UTC m=+984.932296207" lastFinishedPulling="2025-12-04 15:53:11.030167626 +0000 UTC m=+1034.992704582" observedRunningTime="2025-12-04 15:53:12.324931301 +0000 UTC m=+1036.287468247" watchObservedRunningTime="2025-12-04 15:53:12.327345092 +0000 UTC m=+1036.289882038" Dec 04 15:53:13 crc kubenswrapper[4878]: I1204 15:53:13.164697 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-tm72f" Dec 04 15:53:14 crc kubenswrapper[4878]: I1204 15:53:14.065267 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-97pcj" Dec 04 15:53:14 crc kubenswrapper[4878]: I1204 15:53:14.778096 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz" Dec 04 15:53:18 crc kubenswrapper[4878]: I1204 15:53:18.153625 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-d6d6c" Dec 04 15:53:18 crc kubenswrapper[4878]: I1204 15:53:18.342563 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-4bdd7" Dec 04 15:53:18 crc kubenswrapper[4878]: I1204 15:53:18.360731 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-spbj8" Dec 04 15:53:18 crc kubenswrapper[4878]: I1204 15:53:18.415900 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-bh7x6" Dec 04 15:53:18 crc kubenswrapper[4878]: I1204 15:53:18.599729 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-974fj" Dec 04 15:53:18 crc kubenswrapper[4878]: I1204 15:53:18.859773 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-lg8ds" Dec 04 15:53:19 crc kubenswrapper[4878]: I1204 15:53:19.097672 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-wrss6" Dec 04 15:53:19 crc kubenswrapper[4878]: I1204 15:53:19.199701 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-n8dqh" Dec 04 15:53:19 crc kubenswrapper[4878]: I1204 15:53:19.283514 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-2vjxr" Dec 04 15:53:19 crc kubenswrapper[4878]: I1204 15:53:19.294477 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-lmpm5" Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.557229 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" event={"ID":"fb61b1d4-aeeb-4526-8515-4d647d61aa9e","Type":"ContainerStarted","Data":"f59c31554f8f9ab2cbb33344d9f4200a6275d190ec3bfa7e637c54ce75fa8707"} Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.560004 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" 
event={"ID":"cd7d361b-7311-4d32-aaae-21ba66a40d69","Type":"ContainerStarted","Data":"424d24e030fb3071daf268e4b338037a5d60da01efe3adbfd7ea61e2629b7332"} Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.560236 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" event={"ID":"f4bb7917-09ae-4b2a-95c1-172ff14e5771","Type":"ContainerStarted","Data":"61a7bee31b31d4a1c8c4cb06ebfdbb5003a3d6fdd54baa414e18784dd733d0e2"} Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.561511 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.562423 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.569029 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" event={"ID":"d37a4080-1835-47f1-bad0-040bcb647c80","Type":"ContainerStarted","Data":"23433018fd3f9c62b65e5bdbfdfaa90622420ddbe23bb2f4d0b3587854f41dfe"} Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.569446 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.591104 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-czgfc" podStartSLOduration=3.58272098 podStartE2EDuration="1m9.591073679s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.094840474 +0000 UTC m=+985.057377430" lastFinishedPulling="2025-12-04 15:53:27.103193173 +0000 UTC m=+1051.065730129" 
observedRunningTime="2025-12-04 15:53:27.585755044 +0000 UTC m=+1051.548292020" watchObservedRunningTime="2025-12-04 15:53:27.591073679 +0000 UTC m=+1051.553610635" Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.610518 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" podStartSLOduration=4.091368552 podStartE2EDuration="1m9.610496392s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.09708138 +0000 UTC m=+985.059618336" lastFinishedPulling="2025-12-04 15:53:26.61620922 +0000 UTC m=+1050.578746176" observedRunningTime="2025-12-04 15:53:27.608190944 +0000 UTC m=+1051.570727900" watchObservedRunningTime="2025-12-04 15:53:27.610496392 +0000 UTC m=+1051.573033348" Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.637995 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" podStartSLOduration=4.191710158 podStartE2EDuration="1m9.637974629s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.095122681 +0000 UTC m=+985.057659637" lastFinishedPulling="2025-12-04 15:53:26.541387152 +0000 UTC m=+1050.503924108" observedRunningTime="2025-12-04 15:53:27.635727182 +0000 UTC m=+1051.598264148" watchObservedRunningTime="2025-12-04 15:53:27.637974629 +0000 UTC m=+1051.600511585" Dec 04 15:53:27 crc kubenswrapper[4878]: I1204 15:53:27.656805 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" podStartSLOduration=4.222386777 podStartE2EDuration="1m9.656765606s" podCreationTimestamp="2025-12-04 15:52:18 +0000 UTC" firstStartedPulling="2025-12-04 15:52:21.101756589 +0000 UTC m=+985.064293545" lastFinishedPulling="2025-12-04 15:53:26.536135418 +0000 UTC m=+1050.498672374" 
observedRunningTime="2025-12-04 15:53:27.6557549 +0000 UTC m=+1051.618291856" watchObservedRunningTime="2025-12-04 15:53:27.656765606 +0000 UTC m=+1051.619302562" Dec 04 15:53:30 crc kubenswrapper[4878]: I1204 15:53:30.840975 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:53:30 crc kubenswrapper[4878]: I1204 15:53:30.841560 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:53:30 crc kubenswrapper[4878]: I1204 15:53:30.841611 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:53:30 crc kubenswrapper[4878]: I1204 15:53:30.842383 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e4b462af175e16fbdc402637b1b344ec58c91bfdf927904f1fcb6f988194d7e"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 15:53:30 crc kubenswrapper[4878]: I1204 15:53:30.842445 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://1e4b462af175e16fbdc402637b1b344ec58c91bfdf927904f1fcb6f988194d7e" gracePeriod=600 Dec 04 15:53:31 crc kubenswrapper[4878]: I1204 
15:53:31.597724 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="1e4b462af175e16fbdc402637b1b344ec58c91bfdf927904f1fcb6f988194d7e" exitCode=0 Dec 04 15:53:31 crc kubenswrapper[4878]: I1204 15:53:31.597803 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"1e4b462af175e16fbdc402637b1b344ec58c91bfdf927904f1fcb6f988194d7e"} Dec 04 15:53:31 crc kubenswrapper[4878]: I1204 15:53:31.598263 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"2ce89844f12ad0014470ec73950bdad107de7be05fbc862a4ec63ed384618b0a"} Dec 04 15:53:31 crc kubenswrapper[4878]: I1204 15:53:31.598300 4878 scope.go:117] "RemoveContainer" containerID="24186795437d00a19bfab5413d9cd89c8f17b821e40eb4736dd5bfc921c524ca" Dec 04 15:53:38 crc kubenswrapper[4878]: I1204 15:53:38.804165 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-hms78" Dec 04 15:53:38 crc kubenswrapper[4878]: I1204 15:53:38.811859 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-cq22h" Dec 04 15:53:39 crc kubenswrapper[4878]: I1204 15:53:39.290948 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-cnxd4" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.454193 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kkbf6"] Dec 04 15:53:57 crc kubenswrapper[4878]: E1204 15:53:57.455040 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="extract-utilities" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.455054 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="extract-utilities" Dec 04 15:53:57 crc kubenswrapper[4878]: E1204 15:53:57.455107 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="extract-utilities" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.455116 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="extract-utilities" Dec 04 15:53:57 crc kubenswrapper[4878]: E1204 15:53:57.455142 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="extract-content" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.455151 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="extract-content" Dec 04 15:53:57 crc kubenswrapper[4878]: E1204 15:53:57.455160 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="extract-content" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.455173 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="extract-content" Dec 04 15:53:57 crc kubenswrapper[4878]: E1204 15:53:57.455191 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="registry-server" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.455199 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="registry-server" Dec 04 15:53:57 crc kubenswrapper[4878]: E1204 15:53:57.455212 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="registry-server" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.455219 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="registry-server" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.455361 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aea9c4f-931c-4440-aabc-25d52f240f08" containerName="registry-server" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.455374 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76d16fb-56b7-4c86-9482-c91990583c80" containerName="registry-server" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.456194 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.469419 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.469943 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.470095 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.470276 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-75kdv" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.483629 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kkbf6"] Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.523712 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nczv9"] Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.525601 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.529071 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nczv9"] Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.529713 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.579281 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p244s\" (UniqueName: \"kubernetes.io/projected/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-kube-api-access-p244s\") pod \"dnsmasq-dns-675f4bcbfc-kkbf6\" (UID: \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.579437 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-config\") pod \"dnsmasq-dns-675f4bcbfc-kkbf6\" (UID: \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.681641 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-config\") pod \"dnsmasq-dns-78dd6ddcc-nczv9\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.681899 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-config\") pod \"dnsmasq-dns-675f4bcbfc-kkbf6\" (UID: \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:53:57 crc 
kubenswrapper[4878]: I1204 15:53:57.683197 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9ts2\" (UniqueName: \"kubernetes.io/projected/be8fdcec-2eb3-4c8e-b8db-3e963e302016-kube-api-access-v9ts2\") pod \"dnsmasq-dns-78dd6ddcc-nczv9\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.682996 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-config\") pod \"dnsmasq-dns-675f4bcbfc-kkbf6\" (UID: \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.683630 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p244s\" (UniqueName: \"kubernetes.io/projected/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-kube-api-access-p244s\") pod \"dnsmasq-dns-675f4bcbfc-kkbf6\" (UID: \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.683688 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nczv9\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.729979 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p244s\" (UniqueName: \"kubernetes.io/projected/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-kube-api-access-p244s\") pod \"dnsmasq-dns-675f4bcbfc-kkbf6\" (UID: \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\") " pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:53:57 crc kubenswrapper[4878]: 
I1204 15:53:57.785742 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9ts2\" (UniqueName: \"kubernetes.io/projected/be8fdcec-2eb3-4c8e-b8db-3e963e302016-kube-api-access-v9ts2\") pod \"dnsmasq-dns-78dd6ddcc-nczv9\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.786082 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nczv9\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.787001 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-nczv9\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.787150 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-config\") pod \"dnsmasq-dns-78dd6ddcc-nczv9\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.787773 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-config\") pod \"dnsmasq-dns-78dd6ddcc-nczv9\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.794399 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.812130 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9ts2\" (UniqueName: \"kubernetes.io/projected/be8fdcec-2eb3-4c8e-b8db-3e963e302016-kube-api-access-v9ts2\") pod \"dnsmasq-dns-78dd6ddcc-nczv9\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:57 crc kubenswrapper[4878]: I1204 15:53:57.848106 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:53:58 crc kubenswrapper[4878]: I1204 15:53:58.334829 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kkbf6"] Dec 04 15:53:58 crc kubenswrapper[4878]: I1204 15:53:58.807045 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nczv9"] Dec 04 15:53:58 crc kubenswrapper[4878]: W1204 15:53:58.813645 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe8fdcec_2eb3_4c8e_b8db_3e963e302016.slice/crio-c56fa1658bb8aa84aaced55a0928ac21180b7fcddab4edbd70c6d3968cf28af5 WatchSource:0}: Error finding container c56fa1658bb8aa84aaced55a0928ac21180b7fcddab4edbd70c6d3968cf28af5: Status 404 returned error can't find the container with id c56fa1658bb8aa84aaced55a0928ac21180b7fcddab4edbd70c6d3968cf28af5 Dec 04 15:53:58 crc kubenswrapper[4878]: I1204 15:53:58.830372 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" event={"ID":"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70","Type":"ContainerStarted","Data":"6226125e07c6a748a01f22eac1a7bad80b9c9cc1a7b03094e813954c5dd91062"} Dec 04 15:53:58 crc kubenswrapper[4878]: I1204 15:53:58.832772 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" 
event={"ID":"be8fdcec-2eb3-4c8e-b8db-3e963e302016","Type":"ContainerStarted","Data":"c56fa1658bb8aa84aaced55a0928ac21180b7fcddab4edbd70c6d3968cf28af5"} Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.011068 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kkbf6"] Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.034196 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsg7s"] Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.037027 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.049834 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsg7s"] Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.199574 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xsg7s\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.199646 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-config\") pod \"dnsmasq-dns-666b6646f7-xsg7s\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.199739 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7mtq\" (UniqueName: \"kubernetes.io/projected/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-kube-api-access-b7mtq\") pod \"dnsmasq-dns-666b6646f7-xsg7s\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " 
pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.301757 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7mtq\" (UniqueName: \"kubernetes.io/projected/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-kube-api-access-b7mtq\") pod \"dnsmasq-dns-666b6646f7-xsg7s\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.307495 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xsg7s\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.307643 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-config\") pod \"dnsmasq-dns-666b6646f7-xsg7s\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.309264 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-xsg7s\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.310761 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-config\") pod \"dnsmasq-dns-666b6646f7-xsg7s\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.335161 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7mtq\" (UniqueName: \"kubernetes.io/projected/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-kube-api-access-b7mtq\") pod \"dnsmasq-dns-666b6646f7-xsg7s\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.368181 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.406493 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nczv9"] Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.456647 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-49lsz"] Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.458206 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.484358 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-49lsz"] Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.611634 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-49lsz\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.611699 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zzh\" (UniqueName: \"kubernetes.io/projected/3469ba31-9f3d-444f-803b-87b26533a34a-kube-api-access-l9zzh\") pod \"dnsmasq-dns-57d769cc4f-49lsz\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" 
Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.611934 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-config\") pod \"dnsmasq-dns-57d769cc4f-49lsz\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.714227 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-49lsz\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.714301 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9zzh\" (UniqueName: \"kubernetes.io/projected/3469ba31-9f3d-444f-803b-87b26533a34a-kube-api-access-l9zzh\") pod \"dnsmasq-dns-57d769cc4f-49lsz\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.714360 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-config\") pod \"dnsmasq-dns-57d769cc4f-49lsz\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.715370 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-49lsz\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.715472 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-config\") pod \"dnsmasq-dns-57d769cc4f-49lsz\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.732353 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zzh\" (UniqueName: \"kubernetes.io/projected/3469ba31-9f3d-444f-803b-87b26533a34a-kube-api-access-l9zzh\") pod \"dnsmasq-dns-57d769cc4f-49lsz\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:00 crc kubenswrapper[4878]: I1204 15:54:00.787344 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.264274 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.265897 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.268285 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.268502 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.268503 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.268639 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.275866 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rbjnw" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.275866 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.279541 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.284994 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.342798 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsg7s"] Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384503 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 
15:54:01.384562 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384600 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384630 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384652 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384680 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f17e1868-a868-47aa-8e98-e60203d8295f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384716 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384753 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f17e1868-a868-47aa-8e98-e60203d8295f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384797 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384835 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-config-data\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.384944 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2v6d\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-kube-api-access-k2v6d\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: W1204 15:54:01.412502 4878 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e6f3b9b_dba6_4d5f_b138_1c169fb069fc.slice/crio-c0a714f201fd6195e0897d2152b1905848cc55e917ad7ee0f02286399d3da60e WatchSource:0}: Error finding container c0a714f201fd6195e0897d2152b1905848cc55e917ad7ee0f02286399d3da60e: Status 404 returned error can't find the container with id c0a714f201fd6195e0897d2152b1905848cc55e917ad7ee0f02286399d3da60e Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486196 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-config-data\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486295 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2v6d\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-kube-api-access-k2v6d\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486730 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486761 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486793 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486818 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486838 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486872 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f17e1868-a868-47aa-8e98-e60203d8295f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486922 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486952 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f17e1868-a868-47aa-8e98-e60203d8295f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.486987 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.487344 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-config-data\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.488019 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.488106 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.488402 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.489281 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.490217 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.494731 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f17e1868-a868-47aa-8e98-e60203d8295f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.495098 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f17e1868-a868-47aa-8e98-e60203d8295f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.502518 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.505750 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-tls\") pod \"rabbitmq-server-0\" 
(UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.507030 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2v6d\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-kube-api-access-k2v6d\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.523621 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.598542 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.600565 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.603266 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.607567 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.607816 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.608047 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.608348 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-n7tw8" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.608566 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.608720 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.610367 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.620283 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.676064 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-49lsz"] Dec 04 15:54:01 crc kubenswrapper[4878]: W1204 15:54:01.676569 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3469ba31_9f3d_444f_803b_87b26533a34a.slice/crio-02897c99779600c5ddc3dfece2539ff4344e64f2e268e31a3e2f25d951e2f79a WatchSource:0}: Error finding container 
02897c99779600c5ddc3dfece2539ff4344e64f2e268e31a3e2f25d951e2f79a: Status 404 returned error can't find the container with id 02897c99779600c5ddc3dfece2539ff4344e64f2e268e31a3e2f25d951e2f79a Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.794421 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.794523 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.794616 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.794638 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b85c4bb-73ad-4002-85b3-46a1f83cd326-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.794669 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqnr\" (UniqueName: 
\"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-kube-api-access-6rqnr\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.794739 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.794949 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.795021 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.795172 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b85c4bb-73ad-4002-85b3-46a1f83cd326-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.795298 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.795430 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897238 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897535 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b85c4bb-73ad-4002-85b3-46a1f83cd326-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897554 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqnr\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-kube-api-access-6rqnr\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897582 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897606 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897630 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897669 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b85c4bb-73ad-4002-85b3-46a1f83cd326-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897703 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897744 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897769 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.897790 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.898790 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.899419 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.899770 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.903631 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b85c4bb-73ad-4002-85b3-46a1f83cd326-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.903639 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b85c4bb-73ad-4002-85b3-46a1f83cd326-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.904200 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.907507 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.907937 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.908226 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.909962 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.927354 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rqnr\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-kube-api-access-6rqnr\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.940092 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.989188 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" event={"ID":"3469ba31-9f3d-444f-803b-87b26533a34a","Type":"ContainerStarted","Data":"02897c99779600c5ddc3dfece2539ff4344e64f2e268e31a3e2f25d951e2f79a"} Dec 04 15:54:01 crc kubenswrapper[4878]: I1204 15:54:01.990805 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" event={"ID":"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc","Type":"ContainerStarted","Data":"c0a714f201fd6195e0897d2152b1905848cc55e917ad7ee0f02286399d3da60e"} Dec 04 15:54:02 crc kubenswrapper[4878]: I1204 15:54:02.236507 4878 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:54:02 crc kubenswrapper[4878]: I1204 15:54:02.272191 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.031829 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f17e1868-a868-47aa-8e98-e60203d8295f","Type":"ContainerStarted","Data":"cc9fe43b48f3a5aff6c14be35ab3fd86c87a35d8a4bdabe9db6282860e0c1095"} Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.258868 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.270601 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.270748 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.287688 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.289046 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.290265 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.291640 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-njwwt" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.324576 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.339066 4878 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.442016 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3463d397-8d58-4444-ac34-52a0597ca441-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.442148 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3463d397-8d58-4444-ac34-52a0597ca441-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.442387 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3463d397-8d58-4444-ac34-52a0597ca441-kolla-config\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.442656 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3463d397-8d58-4444-ac34-52a0597ca441-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.442849 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3463d397-8d58-4444-ac34-52a0597ca441-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " 
pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.443069 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3463d397-8d58-4444-ac34-52a0597ca441-config-data-default\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.443100 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjn8w\" (UniqueName: \"kubernetes.io/projected/3463d397-8d58-4444-ac34-52a0597ca441-kube-api-access-tjn8w\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.443146 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.545253 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.545335 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3463d397-8d58-4444-ac34-52a0597ca441-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.545381 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3463d397-8d58-4444-ac34-52a0597ca441-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.545462 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3463d397-8d58-4444-ac34-52a0597ca441-kolla-config\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.545510 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3463d397-8d58-4444-ac34-52a0597ca441-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.545540 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3463d397-8d58-4444-ac34-52a0597ca441-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.545609 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3463d397-8d58-4444-ac34-52a0597ca441-config-data-default\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.545634 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjn8w\" (UniqueName: 
\"kubernetes.io/projected/3463d397-8d58-4444-ac34-52a0597ca441-kube-api-access-tjn8w\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.546503 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.547262 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3463d397-8d58-4444-ac34-52a0597ca441-config-data-generated\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.548613 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3463d397-8d58-4444-ac34-52a0597ca441-kolla-config\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.548620 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3463d397-8d58-4444-ac34-52a0597ca441-operator-scripts\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.551329 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3463d397-8d58-4444-ac34-52a0597ca441-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.559559 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3463d397-8d58-4444-ac34-52a0597ca441-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.560665 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/3463d397-8d58-4444-ac34-52a0597ca441-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.570131 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjn8w\" (UniqueName: \"kubernetes.io/projected/3463d397-8d58-4444-ac34-52a0597ca441-kube-api-access-tjn8w\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.658254 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"3463d397-8d58-4444-ac34-52a0597ca441\") " pod="openstack/openstack-galera-0" Dec 04 15:54:03 crc kubenswrapper[4878]: I1204 15:54:03.909036 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.054063 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b85c4bb-73ad-4002-85b3-46a1f83cd326","Type":"ContainerStarted","Data":"8a4d08a36e64c2d8389be5d832bc44f66ebfe38611218cdd07d13a5c21a09591"} Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.588169 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.591323 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.595269 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-5ch62" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.595482 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.595610 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.595645 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.600816 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.771014 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc 
kubenswrapper[4878]: I1204 15:54:04.771071 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.771133 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78gnm\" (UniqueName: \"kubernetes.io/projected/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-kube-api-access-78gnm\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.771206 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.771232 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.771269 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc 
kubenswrapper[4878]: I1204 15:54:04.771391 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.771459 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.799695 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.873562 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.873604 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.873632 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78gnm\" (UniqueName: \"kubernetes.io/projected/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-kube-api-access-78gnm\") pod 
\"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.873660 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.873675 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.873698 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.873731 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.873779 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " 
pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.874764 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.876205 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.877100 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.878745 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.879411 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 
15:54:04.881912 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.887094 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.905960 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78gnm\" (UniqueName: \"kubernetes.io/projected/9d775ef8-4c79-4ce4-b5bd-9d3290fb3256-kube-api-access-78gnm\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.918445 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256\") " pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.946417 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.947668 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.950663 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.950971 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-gpdrr" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.951687 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 04 15:54:04 crc kubenswrapper[4878]: I1204 15:54:04.953787 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.109938 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065cfa24-566e-4cb0-8827-acbc50620fee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.109985 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/065cfa24-566e-4cb0-8827-acbc50620fee-kolla-config\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.110020 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/065cfa24-566e-4cb0-8827-acbc50620fee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.110080 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-z276z\" (UniqueName: \"kubernetes.io/projected/065cfa24-566e-4cb0-8827-acbc50620fee-kube-api-access-z276z\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.110109 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/065cfa24-566e-4cb0-8827-acbc50620fee-config-data\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.124362 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3463d397-8d58-4444-ac34-52a0597ca441","Type":"ContainerStarted","Data":"1c1452db013523cbd6d0ff224d45b15250c22ea89579dd00660a38cc1fc8e095"} Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.213083 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z276z\" (UniqueName: \"kubernetes.io/projected/065cfa24-566e-4cb0-8827-acbc50620fee-kube-api-access-z276z\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.213150 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/065cfa24-566e-4cb0-8827-acbc50620fee-config-data\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.213220 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065cfa24-566e-4cb0-8827-acbc50620fee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 
crc kubenswrapper[4878]: I1204 15:54:05.213243 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/065cfa24-566e-4cb0-8827-acbc50620fee-kolla-config\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.213266 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/065cfa24-566e-4cb0-8827-acbc50620fee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.214611 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/065cfa24-566e-4cb0-8827-acbc50620fee-config-data\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.214672 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/065cfa24-566e-4cb0-8827-acbc50620fee-kolla-config\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.216425 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.223782 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/065cfa24-566e-4cb0-8827-acbc50620fee-memcached-tls-certs\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.224183 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/065cfa24-566e-4cb0-8827-acbc50620fee-combined-ca-bundle\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.229429 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z276z\" (UniqueName: \"kubernetes.io/projected/065cfa24-566e-4cb0-8827-acbc50620fee-kube-api-access-z276z\") pod \"memcached-0\" (UID: \"065cfa24-566e-4cb0-8827-acbc50620fee\") " pod="openstack/memcached-0" Dec 04 15:54:05 crc kubenswrapper[4878]: I1204 15:54:05.293637 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 04 15:54:06 crc kubenswrapper[4878]: I1204 15:54:06.830954 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 04 15:54:06 crc kubenswrapper[4878]: I1204 15:54:06.882173 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 04 15:54:06 crc kubenswrapper[4878]: W1204 15:54:06.892577 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod065cfa24_566e_4cb0_8827_acbc50620fee.slice/crio-2ef2ea471e80a77179b51d2035e31c892fa2878b04bdf3791d8f59ef4a0eeb67 WatchSource:0}: Error finding container 2ef2ea471e80a77179b51d2035e31c892fa2878b04bdf3791d8f59ef4a0eeb67: Status 404 returned error can't find the container with id 2ef2ea471e80a77179b51d2035e31c892fa2878b04bdf3791d8f59ef4a0eeb67 Dec 04 15:54:06 crc kubenswrapper[4878]: I1204 15:54:06.989150 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 15:54:06 crc kubenswrapper[4878]: I1204 15:54:06.990539 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 15:54:06 crc kubenswrapper[4878]: I1204 15:54:06.999371 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-4tt2n" Dec 04 15:54:07 crc kubenswrapper[4878]: I1204 15:54:07.003764 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 15:54:07 crc kubenswrapper[4878]: I1204 15:54:07.189511 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzw8k\" (UniqueName: \"kubernetes.io/projected/c38cb4c2-8301-4304-a2a8-beed07ff5c49-kube-api-access-rzw8k\") pod \"kube-state-metrics-0\" (UID: \"c38cb4c2-8301-4304-a2a8-beed07ff5c49\") " pod="openstack/kube-state-metrics-0" Dec 04 15:54:07 crc kubenswrapper[4878]: I1204 15:54:07.213114 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256","Type":"ContainerStarted","Data":"b25511a73a52867ae13b73978ceff8ff9b4a9d8cdab4453a115d220381313d95"} Dec 04 15:54:07 crc kubenswrapper[4878]: I1204 15:54:07.215173 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"065cfa24-566e-4cb0-8827-acbc50620fee","Type":"ContainerStarted","Data":"2ef2ea471e80a77179b51d2035e31c892fa2878b04bdf3791d8f59ef4a0eeb67"} Dec 04 15:54:07 crc kubenswrapper[4878]: I1204 15:54:07.291488 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzw8k\" (UniqueName: \"kubernetes.io/projected/c38cb4c2-8301-4304-a2a8-beed07ff5c49-kube-api-access-rzw8k\") pod \"kube-state-metrics-0\" (UID: \"c38cb4c2-8301-4304-a2a8-beed07ff5c49\") " pod="openstack/kube-state-metrics-0" Dec 04 15:54:07 crc kubenswrapper[4878]: I1204 15:54:07.318173 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzw8k\" (UniqueName: 
\"kubernetes.io/projected/c38cb4c2-8301-4304-a2a8-beed07ff5c49-kube-api-access-rzw8k\") pod \"kube-state-metrics-0\" (UID: \"c38cb4c2-8301-4304-a2a8-beed07ff5c49\") " pod="openstack/kube-state-metrics-0" Dec 04 15:54:07 crc kubenswrapper[4878]: I1204 15:54:07.327777 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 15:54:08 crc kubenswrapper[4878]: I1204 15:54:08.344902 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 15:54:09 crc kubenswrapper[4878]: I1204 15:54:09.974274 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qt5xl"] Dec 04 15:54:09 crc kubenswrapper[4878]: I1204 15:54:09.975660 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:09 crc kubenswrapper[4878]: I1204 15:54:09.979366 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-v8jcw" Dec 04 15:54:09 crc kubenswrapper[4878]: I1204 15:54:09.990091 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 04 15:54:09 crc kubenswrapper[4878]: I1204 15:54:09.990251 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.004664 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-cvfgn"] Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.006497 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.014875 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qt5xl"] Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.098689 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4xfh\" (UniqueName: \"kubernetes.io/projected/5efdf6a5-f2e2-4839-a976-39d5104d7d83-kube-api-access-q4xfh\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.098759 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-688sk\" (UniqueName: \"kubernetes.io/projected/76972b0d-60b4-427a-83fa-69d53c8c1e64-kube-api-access-688sk\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.098821 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/76972b0d-60b4-427a-83fa-69d53c8c1e64-ovn-controller-tls-certs\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.098849 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76972b0d-60b4-427a-83fa-69d53c8c1e64-scripts\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.098903 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-var-run\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.098934 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-etc-ovs\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.098961 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/76972b0d-60b4-427a-83fa-69d53c8c1e64-var-run-ovn\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.099015 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5efdf6a5-f2e2-4839-a976-39d5104d7d83-scripts\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.099046 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-var-lib\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.099075 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/76972b0d-60b4-427a-83fa-69d53c8c1e64-var-run\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.099102 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/76972b0d-60b4-427a-83fa-69d53c8c1e64-var-log-ovn\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.099128 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76972b0d-60b4-427a-83fa-69d53c8c1e64-combined-ca-bundle\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.099166 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-var-log\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.149436 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cvfgn"] Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.209783 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/76972b0d-60b4-427a-83fa-69d53c8c1e64-var-run-ovn\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.209873 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5efdf6a5-f2e2-4839-a976-39d5104d7d83-scripts\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.209913 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-var-lib\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.209935 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/76972b0d-60b4-427a-83fa-69d53c8c1e64-var-run\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.209977 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/76972b0d-60b4-427a-83fa-69d53c8c1e64-var-log-ovn\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.209993 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76972b0d-60b4-427a-83fa-69d53c8c1e64-combined-ca-bundle\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.210523 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/76972b0d-60b4-427a-83fa-69d53c8c1e64-var-run\") pod 
\"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.210599 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-var-log\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.210626 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-var-lib\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.210766 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/76972b0d-60b4-427a-83fa-69d53c8c1e64-var-log-ovn\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.211365 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-var-log\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.211419 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4xfh\" (UniqueName: \"kubernetes.io/projected/5efdf6a5-f2e2-4839-a976-39d5104d7d83-kube-api-access-q4xfh\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 
15:54:10.211444 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-688sk\" (UniqueName: \"kubernetes.io/projected/76972b0d-60b4-427a-83fa-69d53c8c1e64-kube-api-access-688sk\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.211489 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/76972b0d-60b4-427a-83fa-69d53c8c1e64-ovn-controller-tls-certs\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.211510 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76972b0d-60b4-427a-83fa-69d53c8c1e64-scripts\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.211541 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-var-run\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.211563 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-etc-ovs\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.211709 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-etc-ovs\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.212335 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5efdf6a5-f2e2-4839-a976-39d5104d7d83-scripts\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.213067 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5efdf6a5-f2e2-4839-a976-39d5104d7d83-var-run\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.221795 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76972b0d-60b4-427a-83fa-69d53c8c1e64-scripts\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.222070 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/76972b0d-60b4-427a-83fa-69d53c8c1e64-var-run-ovn\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.314791 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/76972b0d-60b4-427a-83fa-69d53c8c1e64-ovn-controller-tls-certs\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " 
pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.319629 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76972b0d-60b4-427a-83fa-69d53c8c1e64-combined-ca-bundle\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.319850 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-688sk\" (UniqueName: \"kubernetes.io/projected/76972b0d-60b4-427a-83fa-69d53c8c1e64-kube-api-access-688sk\") pod \"ovn-controller-qt5xl\" (UID: \"76972b0d-60b4-427a-83fa-69d53c8c1e64\") " pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.326977 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4xfh\" (UniqueName: \"kubernetes.io/projected/5efdf6a5-f2e2-4839-a976-39d5104d7d83-kube-api-access-q4xfh\") pod \"ovn-controller-ovs-cvfgn\" (UID: \"5efdf6a5-f2e2-4839-a976-39d5104d7d83\") " pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.336361 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:10 crc kubenswrapper[4878]: I1204 15:54:10.614853 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:12 crc kubenswrapper[4878]: I1204 15:54:12.909843 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 04 15:54:12 crc kubenswrapper[4878]: I1204 15:54:12.911739 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:12 crc kubenswrapper[4878]: I1204 15:54:12.915424 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 04 15:54:12 crc kubenswrapper[4878]: I1204 15:54:12.915587 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 04 15:54:12 crc kubenswrapper[4878]: I1204 15:54:12.916582 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 04 15:54:12 crc kubenswrapper[4878]: I1204 15:54:12.917227 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 04 15:54:12 crc kubenswrapper[4878]: I1204 15:54:12.917259 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-czwbv" Dec 04 15:54:12 crc kubenswrapper[4878]: I1204 15:54:12.932091 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.082316 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92l4m\" (UniqueName: \"kubernetes.io/projected/ab2faf38-cbc6-4141-8553-58bad8a0675f-kube-api-access-92l4m\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.082369 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.082554 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ab2faf38-cbc6-4141-8553-58bad8a0675f-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.082624 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2faf38-cbc6-4141-8553-58bad8a0675f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.082718 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2faf38-cbc6-4141-8553-58bad8a0675f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.082785 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab2faf38-cbc6-4141-8553-58bad8a0675f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.082855 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2faf38-cbc6-4141-8553-58bad8a0675f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.082970 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab2faf38-cbc6-4141-8553-58bad8a0675f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.183766 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92l4m\" (UniqueName: \"kubernetes.io/projected/ab2faf38-cbc6-4141-8553-58bad8a0675f-kube-api-access-92l4m\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.183821 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.183853 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab2faf38-cbc6-4141-8553-58bad8a0675f-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.183899 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2faf38-cbc6-4141-8553-58bad8a0675f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.183935 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab2faf38-cbc6-4141-8553-58bad8a0675f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 
15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.183972 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab2faf38-cbc6-4141-8553-58bad8a0675f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.184012 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2faf38-cbc6-4141-8553-58bad8a0675f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.184055 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2faf38-cbc6-4141-8553-58bad8a0675f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.185000 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.187788 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab2faf38-cbc6-4141-8553-58bad8a0675f-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.189849 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ab2faf38-cbc6-4141-8553-58bad8a0675f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.190007 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab2faf38-cbc6-4141-8553-58bad8a0675f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.190270 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2faf38-cbc6-4141-8553-58bad8a0675f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.190270 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2faf38-cbc6-4141-8553-58bad8a0675f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.204570 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab2faf38-cbc6-4141-8553-58bad8a0675f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.207469 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92l4m\" (UniqueName: \"kubernetes.io/projected/ab2faf38-cbc6-4141-8553-58bad8a0675f-kube-api-access-92l4m\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " 
pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.212691 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"ab2faf38-cbc6-4141-8553-58bad8a0675f\") " pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:13 crc kubenswrapper[4878]: I1204 15:54:13.232726 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:14 crc kubenswrapper[4878]: W1204 15:54:14.372449 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc38cb4c2_8301_4304_a2a8_beed07ff5c49.slice/crio-7f9576834ee9c5169223feb5448e108ed3396f9388e5dfcc332621772a148f9e WatchSource:0}: Error finding container 7f9576834ee9c5169223feb5448e108ed3396f9388e5dfcc332621772a148f9e: Status 404 returned error can't find the container with id 7f9576834ee9c5169223feb5448e108ed3396f9388e5dfcc332621772a148f9e Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.387962 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c38cb4c2-8301-4304-a2a8-beed07ff5c49","Type":"ContainerStarted","Data":"7f9576834ee9c5169223feb5448e108ed3396f9388e5dfcc332621772a148f9e"} Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.745624 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.748678 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.751571 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.753074 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.753999 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-nwpg7" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.755448 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.764487 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.922340 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltvlt\" (UniqueName: \"kubernetes.io/projected/d7834dc6-68d7-4afb-bbcd-d247294ba85b-kube-api-access-ltvlt\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.922424 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7834dc6-68d7-4afb-bbcd-d247294ba85b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.922451 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7834dc6-68d7-4afb-bbcd-d247294ba85b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: 
\"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.922480 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7834dc6-68d7-4afb-bbcd-d247294ba85b-config\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.922501 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7834dc6-68d7-4afb-bbcd-d247294ba85b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.922522 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.922542 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7834dc6-68d7-4afb-bbcd-d247294ba85b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:14 crc kubenswrapper[4878]: I1204 15:54:14.922573 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7834dc6-68d7-4afb-bbcd-d247294ba85b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc 
kubenswrapper[4878]: I1204 15:54:15.024414 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7834dc6-68d7-4afb-bbcd-d247294ba85b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.024534 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltvlt\" (UniqueName: \"kubernetes.io/projected/d7834dc6-68d7-4afb-bbcd-d247294ba85b-kube-api-access-ltvlt\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.024586 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7834dc6-68d7-4afb-bbcd-d247294ba85b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.024609 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7834dc6-68d7-4afb-bbcd-d247294ba85b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.024640 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7834dc6-68d7-4afb-bbcd-d247294ba85b-config\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.024665 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d7834dc6-68d7-4afb-bbcd-d247294ba85b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.024684 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.024706 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7834dc6-68d7-4afb-bbcd-d247294ba85b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.025916 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7834dc6-68d7-4afb-bbcd-d247294ba85b-config\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.026243 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d7834dc6-68d7-4afb-bbcd-d247294ba85b-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.027125 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7834dc6-68d7-4afb-bbcd-d247294ba85b-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 
15:54:15.027294 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.104746 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7834dc6-68d7-4afb-bbcd-d247294ba85b-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.127900 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7834dc6-68d7-4afb-bbcd-d247294ba85b-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.129364 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltvlt\" (UniqueName: \"kubernetes.io/projected/d7834dc6-68d7-4afb-bbcd-d247294ba85b-kube-api-access-ltvlt\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.174728 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7834dc6-68d7-4afb-bbcd-d247294ba85b-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.181380 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d7834dc6-68d7-4afb-bbcd-d247294ba85b\") " pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:15 crc kubenswrapper[4878]: I1204 15:54:15.415517 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:25 crc kubenswrapper[4878]: E1204 15:54:25.773053 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 04 15:54:25 crc kubenswrapper[4878]: E1204 15:54:25.773793 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k2v6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(f17e1868-a868-47aa-8e98-e60203d8295f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:54:25 crc 
kubenswrapper[4878]: E1204 15:54:25.775057 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.395342 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.395758 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rqnr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(2b85c4bb-73ad-4002-85b3-46a1f83cd326): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:54:26 crc 
kubenswrapper[4878]: E1204 15:54:26.397070 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.412043 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.412278 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9ts2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-nczv9_openstack(be8fdcec-2eb3-4c8e-b8db-3e963e302016): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.413497 4878 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" podUID="be8fdcec-2eb3-4c8e-b8db-3e963e302016" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.424300 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.424626 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l9zzh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-49lsz_openstack(3469ba31-9f3d-444f-803b-87b26533a34a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.425811 4878 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" podUID="3469ba31-9f3d-444f-803b-87b26533a34a" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.446717 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.446932 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p244s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-kkbf6_openstack(543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.448313 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" podUID="543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.450575 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.450771 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b7mtq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-xsg7s_openstack(6e6f3b9b-dba6-4d5f-b138-1c169fb069fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.452514 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" podUID="6e6f3b9b-dba6-4d5f-b138-1c169fb069fc" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.576415 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" podUID="6e6f3b9b-dba6-4d5f-b138-1c169fb069fc" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.576624 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" podUID="3469ba31-9f3d-444f-803b-87b26533a34a" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.576729 4878 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" Dec 04 15:54:26 crc kubenswrapper[4878]: E1204 15:54:26.579564 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.049151 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.064164 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.230305 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-config\") pod \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.230386 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-dns-svc\") pod \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.230429 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-config\") pod \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\" (UID: \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\") " Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.230526 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p244s\" (UniqueName: \"kubernetes.io/projected/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-kube-api-access-p244s\") pod \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\" (UID: \"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70\") " Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.230565 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9ts2\" (UniqueName: \"kubernetes.io/projected/be8fdcec-2eb3-4c8e-b8db-3e963e302016-kube-api-access-v9ts2\") pod \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\" (UID: \"be8fdcec-2eb3-4c8e-b8db-3e963e302016\") " Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.231171 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-config" (OuterVolumeSpecName: "config") pod "be8fdcec-2eb3-4c8e-b8db-3e963e302016" (UID: "be8fdcec-2eb3-4c8e-b8db-3e963e302016"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.231223 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be8fdcec-2eb3-4c8e-b8db-3e963e302016" (UID: "be8fdcec-2eb3-4c8e-b8db-3e963e302016"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.231753 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-config" (OuterVolumeSpecName: "config") pod "543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70" (UID: "543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.236949 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-kube-api-access-p244s" (OuterVolumeSpecName: "kube-api-access-p244s") pod "543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70" (UID: "543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70"). InnerVolumeSpecName "kube-api-access-p244s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.237014 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be8fdcec-2eb3-4c8e-b8db-3e963e302016-kube-api-access-v9ts2" (OuterVolumeSpecName: "kube-api-access-v9ts2") pod "be8fdcec-2eb3-4c8e-b8db-3e963e302016" (UID: "be8fdcec-2eb3-4c8e-b8db-3e963e302016"). InnerVolumeSpecName "kube-api-access-v9ts2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.332991 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.333342 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.333358 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p244s\" (UniqueName: \"kubernetes.io/projected/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70-kube-api-access-p244s\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.333374 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9ts2\" (UniqueName: \"kubernetes.io/projected/be8fdcec-2eb3-4c8e-b8db-3e963e302016-kube-api-access-v9ts2\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.333386 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be8fdcec-2eb3-4c8e-b8db-3e963e302016-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.558599 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-cvfgn"] Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.594224 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qt5xl"] Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.598943 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" 
event={"ID":"543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70","Type":"ContainerDied","Data":"6226125e07c6a748a01f22eac1a7bad80b9c9cc1a7b03094e813954c5dd91062"} Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.599057 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-kkbf6" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.604932 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.605008 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-nczv9" event={"ID":"be8fdcec-2eb3-4c8e-b8db-3e963e302016","Type":"ContainerDied","Data":"c56fa1658bb8aa84aaced55a0928ac21180b7fcddab4edbd70c6d3968cf28af5"} Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.610056 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3463d397-8d58-4444-ac34-52a0597ca441","Type":"ContainerStarted","Data":"4077375efed7aeaffa4e3123bc2f732b1355090f4d6ad5c7455e1437d76875b4"} Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.616182 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256","Type":"ContainerStarted","Data":"12ec806cb786179a124b12e29630ffdbc6f40585059fa84bbea534a39b0e045f"} Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.623725 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"065cfa24-566e-4cb0-8827-acbc50620fee","Type":"ContainerStarted","Data":"e0bc7ea48360ca75374f480b8fbf4edc2c9d46ded99469771fa232a29e7b6c3a"} Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.623985 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.683061 4878 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nczv9"] Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.702955 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-nczv9"] Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.718231 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.741221 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kkbf6"] Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.754100 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-kkbf6"] Dec 04 15:54:29 crc kubenswrapper[4878]: I1204 15:54:29.760391 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.7503917429999998 podStartE2EDuration="25.760368608s" podCreationTimestamp="2025-12-04 15:54:04 +0000 UTC" firstStartedPulling="2025-12-04 15:54:06.8956595 +0000 UTC m=+1090.858196456" lastFinishedPulling="2025-12-04 15:54:28.905636365 +0000 UTC m=+1112.868173321" observedRunningTime="2025-12-04 15:54:29.748697314 +0000 UTC m=+1113.711234280" watchObservedRunningTime="2025-12-04 15:54:29.760368608 +0000 UTC m=+1113.722905564" Dec 04 15:54:29 crc kubenswrapper[4878]: W1204 15:54:29.780083 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5efdf6a5_f2e2_4839_a976_39d5104d7d83.slice/crio-72ab76f08c5c9c43ace187ae14e8f9c97e3c5d40fa914f2bf690b88b49dab303 WatchSource:0}: Error finding container 72ab76f08c5c9c43ace187ae14e8f9c97e3c5d40fa914f2bf690b88b49dab303: Status 404 returned error can't find the container with id 72ab76f08c5c9c43ace187ae14e8f9c97e3c5d40fa914f2bf690b88b49dab303 Dec 04 15:54:29 crc kubenswrapper[4878]: W1204 15:54:29.783117 4878 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76972b0d_60b4_427a_83fa_69d53c8c1e64.slice/crio-a48b7fe1e68d348232f2e95d3f47e4f1d939fd45ad163f7021de6887ea246424 WatchSource:0}: Error finding container a48b7fe1e68d348232f2e95d3f47e4f1d939fd45ad163f7021de6887ea246424: Status 404 returned error can't find the container with id a48b7fe1e68d348232f2e95d3f47e4f1d939fd45ad163f7021de6887ea246424 Dec 04 15:54:29 crc kubenswrapper[4878]: W1204 15:54:29.783482 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab2faf38_cbc6_4141_8553_58bad8a0675f.slice/crio-a6697a08002b849705f3d5cbbe0671cd56bb08d773496077d46dfde7617f7c52 WatchSource:0}: Error finding container a6697a08002b849705f3d5cbbe0671cd56bb08d773496077d46dfde7617f7c52: Status 404 returned error can't find the container with id a6697a08002b849705f3d5cbbe0671cd56bb08d773496077d46dfde7617f7c52 Dec 04 15:54:30 crc kubenswrapper[4878]: I1204 15:54:30.483975 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 04 15:54:30 crc kubenswrapper[4878]: I1204 15:54:30.633031 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab2faf38-cbc6-4141-8553-58bad8a0675f","Type":"ContainerStarted","Data":"a6697a08002b849705f3d5cbbe0671cd56bb08d773496077d46dfde7617f7c52"} Dec 04 15:54:30 crc kubenswrapper[4878]: I1204 15:54:30.634453 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qt5xl" event={"ID":"76972b0d-60b4-427a-83fa-69d53c8c1e64","Type":"ContainerStarted","Data":"a48b7fe1e68d348232f2e95d3f47e4f1d939fd45ad163f7021de6887ea246424"} Dec 04 15:54:30 crc kubenswrapper[4878]: I1204 15:54:30.635590 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cvfgn" 
event={"ID":"5efdf6a5-f2e2-4839-a976-39d5104d7d83","Type":"ContainerStarted","Data":"72ab76f08c5c9c43ace187ae14e8f9c97e3c5d40fa914f2bf690b88b49dab303"} Dec 04 15:54:30 crc kubenswrapper[4878]: W1204 15:54:30.675767 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7834dc6_68d7_4afb_bbcd_d247294ba85b.slice/crio-84d6656eab9921d414d1b6bb9dbcb9e59aa6d0b8bc3261feb3f35ab021542426 WatchSource:0}: Error finding container 84d6656eab9921d414d1b6bb9dbcb9e59aa6d0b8bc3261feb3f35ab021542426: Status 404 returned error can't find the container with id 84d6656eab9921d414d1b6bb9dbcb9e59aa6d0b8bc3261feb3f35ab021542426 Dec 04 15:54:31 crc kubenswrapper[4878]: I1204 15:54:31.189447 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70" path="/var/lib/kubelet/pods/543d48bd-a0a8-4aa6-a2aa-13b3e9f72f70/volumes" Dec 04 15:54:31 crc kubenswrapper[4878]: I1204 15:54:31.189852 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be8fdcec-2eb3-4c8e-b8db-3e963e302016" path="/var/lib/kubelet/pods/be8fdcec-2eb3-4c8e-b8db-3e963e302016/volumes" Dec 04 15:54:31 crc kubenswrapper[4878]: I1204 15:54:31.648745 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d7834dc6-68d7-4afb-bbcd-d247294ba85b","Type":"ContainerStarted","Data":"84d6656eab9921d414d1b6bb9dbcb9e59aa6d0b8bc3261feb3f35ab021542426"} Dec 04 15:54:31 crc kubenswrapper[4878]: I1204 15:54:31.650634 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c38cb4c2-8301-4304-a2a8-beed07ff5c49","Type":"ContainerStarted","Data":"981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa"} Dec 04 15:54:31 crc kubenswrapper[4878]: I1204 15:54:31.652124 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 04 15:54:31 crc 
kubenswrapper[4878]: I1204 15:54:31.676820 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=8.610488032 podStartE2EDuration="25.676743808s" podCreationTimestamp="2025-12-04 15:54:06 +0000 UTC" firstStartedPulling="2025-12-04 15:54:14.376337373 +0000 UTC m=+1098.338874319" lastFinishedPulling="2025-12-04 15:54:31.442593129 +0000 UTC m=+1115.405130095" observedRunningTime="2025-12-04 15:54:31.671969307 +0000 UTC m=+1115.634506283" watchObservedRunningTime="2025-12-04 15:54:31.676743808 +0000 UTC m=+1115.639280764" Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.677122 4878 generic.go:334] "Generic (PLEG): container finished" podID="5efdf6a5-f2e2-4839-a976-39d5104d7d83" containerID="aae10ecc4d663ae70be09865ca7567e66996b8d3a2f7af7a631c6c51e3e741fe" exitCode=0 Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.677195 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cvfgn" event={"ID":"5efdf6a5-f2e2-4839-a976-39d5104d7d83","Type":"ContainerDied","Data":"aae10ecc4d663ae70be09865ca7567e66996b8d3a2f7af7a631c6c51e3e741fe"} Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.682557 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab2faf38-cbc6-4141-8553-58bad8a0675f","Type":"ContainerStarted","Data":"8f57204d3e54cd2bf4cc56ccb048998951e89777d87d3f889bc41478baa23eee"} Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.686783 4878 generic.go:334] "Generic (PLEG): container finished" podID="3463d397-8d58-4444-ac34-52a0597ca441" containerID="4077375efed7aeaffa4e3123bc2f732b1355090f4d6ad5c7455e1437d76875b4" exitCode=0 Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.686919 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"3463d397-8d58-4444-ac34-52a0597ca441","Type":"ContainerDied","Data":"4077375efed7aeaffa4e3123bc2f732b1355090f4d6ad5c7455e1437d76875b4"} Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.690377 4878 generic.go:334] "Generic (PLEG): container finished" podID="9d775ef8-4c79-4ce4-b5bd-9d3290fb3256" containerID="12ec806cb786179a124b12e29630ffdbc6f40585059fa84bbea534a39b0e045f" exitCode=0 Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.690531 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256","Type":"ContainerDied","Data":"12ec806cb786179a124b12e29630ffdbc6f40585059fa84bbea534a39b0e045f"} Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.694979 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qt5xl" event={"ID":"76972b0d-60b4-427a-83fa-69d53c8c1e64","Type":"ContainerStarted","Data":"79f7c97e9436c625f98922e4051547a62ff95c406bd563a88b36878b2aeab0a5"} Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.695104 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-qt5xl" Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.698796 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d7834dc6-68d7-4afb-bbcd-d247294ba85b","Type":"ContainerStarted","Data":"1772173b7ac7bd37200687af812c536162373a1c73bd98ee6ce66ce8426e8feb"} Dec 04 15:54:34 crc kubenswrapper[4878]: I1204 15:54:34.870302 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qt5xl" podStartSLOduration=21.793547797 podStartE2EDuration="25.870251902s" podCreationTimestamp="2025-12-04 15:54:09 +0000 UTC" firstStartedPulling="2025-12-04 15:54:29.785536632 +0000 UTC m=+1113.748073588" lastFinishedPulling="2025-12-04 15:54:33.862240737 +0000 UTC m=+1117.824777693" observedRunningTime="2025-12-04 15:54:34.868741174 
+0000 UTC m=+1118.831278140" watchObservedRunningTime="2025-12-04 15:54:34.870251902 +0000 UTC m=+1118.832788858" Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.295142 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.716786 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"3463d397-8d58-4444-ac34-52a0597ca441","Type":"ContainerStarted","Data":"65220cbe4d06848ef1193625262fda0e193e9a9c2f6b02986230f782386f2d8b"} Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.725162 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"9d775ef8-4c79-4ce4-b5bd-9d3290fb3256","Type":"ContainerStarted","Data":"f9136e1f2960a44f356b507a46042869780a34e646f47d701acd18cc57c8a127"} Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.728824 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cvfgn" event={"ID":"5efdf6a5-f2e2-4839-a976-39d5104d7d83","Type":"ContainerStarted","Data":"97b47fc61a7880a22c34a367645cb150eb3a8b1d7272477046b16167abc940c0"} Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.728921 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-cvfgn" event={"ID":"5efdf6a5-f2e2-4839-a976-39d5104d7d83","Type":"ContainerStarted","Data":"918b6112c226aeb4c6ab81fd3fba410834f087973fdc534fc1144849a3aebba1"} Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.728987 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.729018 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.743054 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/openstack-galera-0" podStartSLOduration=9.616338061 podStartE2EDuration="33.743033611s" podCreationTimestamp="2025-12-04 15:54:02 +0000 UTC" firstStartedPulling="2025-12-04 15:54:04.841414399 +0000 UTC m=+1088.803951355" lastFinishedPulling="2025-12-04 15:54:28.968109949 +0000 UTC m=+1112.930646905" observedRunningTime="2025-12-04 15:54:35.736901256 +0000 UTC m=+1119.699438222" watchObservedRunningTime="2025-12-04 15:54:35.743033611 +0000 UTC m=+1119.705570567" Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.763924 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-cvfgn" podStartSLOduration=22.700980998 podStartE2EDuration="26.763899296s" podCreationTimestamp="2025-12-04 15:54:09 +0000 UTC" firstStartedPulling="2025-12-04 15:54:29.783210593 +0000 UTC m=+1113.745747549" lastFinishedPulling="2025-12-04 15:54:33.846128891 +0000 UTC m=+1117.808665847" observedRunningTime="2025-12-04 15:54:35.761203728 +0000 UTC m=+1119.723740704" watchObservedRunningTime="2025-12-04 15:54:35.763899296 +0000 UTC m=+1119.726436252" Dec 04 15:54:35 crc kubenswrapper[4878]: I1204 15:54:35.782417 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.729146399 podStartE2EDuration="32.782395112s" podCreationTimestamp="2025-12-04 15:54:03 +0000 UTC" firstStartedPulling="2025-12-04 15:54:06.854202498 +0000 UTC m=+1090.816739454" lastFinishedPulling="2025-12-04 15:54:28.907451221 +0000 UTC m=+1112.869988167" observedRunningTime="2025-12-04 15:54:35.78110979 +0000 UTC m=+1119.743646746" watchObservedRunningTime="2025-12-04 15:54:35.782395112 +0000 UTC m=+1119.744932068" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.380196 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.455376 4878 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsg7s"] Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.477413 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c6d8p"] Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.480562 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.509374 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c6d8p"] Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.570279 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-c6d8p\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.570328 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvrsf\" (UniqueName: \"kubernetes.io/projected/af68b966-34f8-4dae-984b-e04817aa4a02-kube-api-access-wvrsf\") pod \"dnsmasq-dns-7cb5889db5-c6d8p\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.570374 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-config\") pod \"dnsmasq-dns-7cb5889db5-c6d8p\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.673978 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-config\") pod \"dnsmasq-dns-7cb5889db5-c6d8p\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.674976 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-config\") pod \"dnsmasq-dns-7cb5889db5-c6d8p\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.675265 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-c6d8p\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.675317 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvrsf\" (UniqueName: \"kubernetes.io/projected/af68b966-34f8-4dae-984b-e04817aa4a02-kube-api-access-wvrsf\") pod \"dnsmasq-dns-7cb5889db5-c6d8p\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.676417 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-c6d8p\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.694242 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvrsf\" (UniqueName: \"kubernetes.io/projected/af68b966-34f8-4dae-984b-e04817aa4a02-kube-api-access-wvrsf\") pod 
\"dnsmasq-dns-7cb5889db5-c6d8p\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:37 crc kubenswrapper[4878]: I1204 15:54:37.826426 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.628125 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.635692 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.638795 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.640110 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.640467 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-7bbph" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.640748 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.657353 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.725707 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.725795 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/10b4321d-097d-4ab2-8014-63c5b80e6839-cache\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.725927 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsvqb\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-kube-api-access-gsvqb\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.725975 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.726024 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/10b4321d-097d-4ab2-8014-63c5b80e6839-lock\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.755144 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.783147 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" event={"ID":"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc","Type":"ContainerDied","Data":"c0a714f201fd6195e0897d2152b1905848cc55e917ad7ee0f02286399d3da60e"} Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.783244 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-xsg7s" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.827132 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7mtq\" (UniqueName: \"kubernetes.io/projected/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-kube-api-access-b7mtq\") pod \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.827204 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-dns-svc\") pod \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.828215 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6e6f3b9b-dba6-4d5f-b138-1c169fb069fc" (UID: "6e6f3b9b-dba6-4d5f-b138-1c169fb069fc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.829487 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-config\") pod \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\" (UID: \"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc\") " Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.829787 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsvqb\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-kube-api-access-gsvqb\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.829854 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.829905 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/10b4321d-097d-4ab2-8014-63c5b80e6839-lock\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.829942 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.829991 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/10b4321d-097d-4ab2-8014-63c5b80e6839-cache\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: E1204 15:54:38.830066 4878 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 04 15:54:38 crc kubenswrapper[4878]: E1204 15:54:38.830098 4878 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.830100 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:38 crc kubenswrapper[4878]: E1204 15:54:38.830173 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift podName:10b4321d-097d-4ab2-8014-63c5b80e6839 nodeName:}" failed. No retries permitted until 2025-12-04 15:54:39.330144245 +0000 UTC m=+1123.292681281 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift") pod "swift-storage-0" (UID: "10b4321d-097d-4ab2-8014-63c5b80e6839") : configmap "swift-ring-files" not found Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.830333 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.830441 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-config" (OuterVolumeSpecName: "config") pod "6e6f3b9b-dba6-4d5f-b138-1c169fb069fc" (UID: "6e6f3b9b-dba6-4d5f-b138-1c169fb069fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.830908 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/10b4321d-097d-4ab2-8014-63c5b80e6839-cache\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.831944 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/10b4321d-097d-4ab2-8014-63c5b80e6839-lock\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.836896 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-kube-api-access-b7mtq" (OuterVolumeSpecName: "kube-api-access-b7mtq") pod 
"6e6f3b9b-dba6-4d5f-b138-1c169fb069fc" (UID: "6e6f3b9b-dba6-4d5f-b138-1c169fb069fc"). InnerVolumeSpecName "kube-api-access-b7mtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.854380 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsvqb\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-kube-api-access-gsvqb\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.855808 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.932184 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:38 crc kubenswrapper[4878]: I1204 15:54:38.932227 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7mtq\" (UniqueName: \"kubernetes.io/projected/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc-kube-api-access-b7mtq\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.342332 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:39 crc kubenswrapper[4878]: E1204 15:54:39.342763 4878 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 04 15:54:39 crc kubenswrapper[4878]: E1204 
15:54:39.342799 4878 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 04 15:54:39 crc kubenswrapper[4878]: E1204 15:54:39.342894 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift podName:10b4321d-097d-4ab2-8014-63c5b80e6839 nodeName:}" failed. No retries permitted until 2025-12-04 15:54:40.342844082 +0000 UTC m=+1124.305381038 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift") pod "swift-storage-0" (UID: "10b4321d-097d-4ab2-8014-63c5b80e6839") : configmap "swift-ring-files" not found Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.399646 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-55hsg"] Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.402212 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.415493 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.416642 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.416932 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.428279 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsg7s"] Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.447400 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-ring-data-devices\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.447453 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-combined-ca-bundle\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.447492 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-dispersionconf\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: 
I1204 15:54:39.447546 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-swiftconf\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.447572 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/947bbc4c-f673-433d-bc78-4411fea88516-etc-swift\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.447619 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj72g\" (UniqueName: \"kubernetes.io/projected/947bbc4c-f673-433d-bc78-4411fea88516-kube-api-access-nj72g\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.447645 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-scripts\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.450573 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-xsg7s"] Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.465945 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-55hsg"] Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.552986 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-combined-ca-bundle\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.553051 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-dispersionconf\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.553099 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-swiftconf\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.553121 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/947bbc4c-f673-433d-bc78-4411fea88516-etc-swift\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.553169 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj72g\" (UniqueName: \"kubernetes.io/projected/947bbc4c-f673-433d-bc78-4411fea88516-kube-api-access-nj72g\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.553196 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-scripts\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.553235 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-ring-data-devices\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.556542 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/947bbc4c-f673-433d-bc78-4411fea88516-etc-swift\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.557343 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-ring-data-devices\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.557367 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-scripts\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.564502 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c6d8p"] Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.565215 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-dispersionconf\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.565610 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-swiftconf\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.566211 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-combined-ca-bundle\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.579160 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj72g\" (UniqueName: \"kubernetes.io/projected/947bbc4c-f673-433d-bc78-4411fea88516-kube-api-access-nj72g\") pod \"swift-ring-rebalance-55hsg\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.775248 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.795586 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab2faf38-cbc6-4141-8553-58bad8a0675f","Type":"ContainerStarted","Data":"655ef6b83f0ba93543cc188cc30fbc4e36cb6acae89f53ab3ae810cb9fcf4c2c"} Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.797755 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" event={"ID":"af68b966-34f8-4dae-984b-e04817aa4a02","Type":"ContainerStarted","Data":"2148e642148a4ed769d89a115212dddea9ade130d4acd2df79e9c7ac6359413c"} Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.800091 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d7834dc6-68d7-4afb-bbcd-d247294ba85b","Type":"ContainerStarted","Data":"d1116e84c1956d4b2c0d2cbd98563f8512c94a8a214b4e7fb99e59184e483db1"} Dec 04 15:54:39 crc kubenswrapper[4878]: I1204 15:54:39.817639 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=19.511204624 podStartE2EDuration="28.817616983s" podCreationTimestamp="2025-12-04 15:54:11 +0000 UTC" firstStartedPulling="2025-12-04 15:54:29.786312881 +0000 UTC m=+1113.748849837" lastFinishedPulling="2025-12-04 15:54:39.09272524 +0000 UTC m=+1123.055262196" observedRunningTime="2025-12-04 15:54:39.812096234 +0000 UTC m=+1123.774633190" watchObservedRunningTime="2025-12-04 15:54:39.817616983 +0000 UTC m=+1123.780153929" Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.233990 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.275047 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 
15:54:40.296761 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.892596435 podStartE2EDuration="27.296737212s" podCreationTimestamp="2025-12-04 15:54:13 +0000 UTC" firstStartedPulling="2025-12-04 15:54:30.678223872 +0000 UTC m=+1114.640760828" lastFinishedPulling="2025-12-04 15:54:39.082364649 +0000 UTC m=+1123.044901605" observedRunningTime="2025-12-04 15:54:39.848623524 +0000 UTC m=+1123.811160490" watchObservedRunningTime="2025-12-04 15:54:40.296737212 +0000 UTC m=+1124.259274168" Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.325063 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-55hsg"] Dec 04 15:54:40 crc kubenswrapper[4878]: W1204 15:54:40.331184 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod947bbc4c_f673_433d_bc78_4411fea88516.slice/crio-992e092c4daa2c12eef24fcc0573aee9d839540261ad83e0681bc647efbc9a56 WatchSource:0}: Error finding container 992e092c4daa2c12eef24fcc0573aee9d839540261ad83e0681bc647efbc9a56: Status 404 returned error can't find the container with id 992e092c4daa2c12eef24fcc0573aee9d839540261ad83e0681bc647efbc9a56 Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.373330 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:40 crc kubenswrapper[4878]: E1204 15:54:40.373514 4878 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 04 15:54:40 crc kubenswrapper[4878]: E1204 15:54:40.373530 4878 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap 
"swift-ring-files" not found Dec 04 15:54:40 crc kubenswrapper[4878]: E1204 15:54:40.373579 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift podName:10b4321d-097d-4ab2-8014-63c5b80e6839 nodeName:}" failed. No retries permitted until 2025-12-04 15:54:42.373561758 +0000 UTC m=+1126.336098714 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift") pod "swift-storage-0" (UID: "10b4321d-097d-4ab2-8014-63c5b80e6839") : configmap "swift-ring-files" not found Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.416503 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.810030 4878 generic.go:334] "Generic (PLEG): container finished" podID="af68b966-34f8-4dae-984b-e04817aa4a02" containerID="28565767df2206f235afdc1a8d304c5fd5cca5eac46d647199c755973547cd9b" exitCode=0 Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.810161 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" event={"ID":"af68b966-34f8-4dae-984b-e04817aa4a02","Type":"ContainerDied","Data":"28565767df2206f235afdc1a8d304c5fd5cca5eac46d647199c755973547cd9b"} Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.811059 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-55hsg" event={"ID":"947bbc4c-f673-433d-bc78-4411fea88516","Type":"ContainerStarted","Data":"992e092c4daa2c12eef24fcc0573aee9d839540261ad83e0681bc647efbc9a56"} Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.812635 4878 generic.go:334] "Generic (PLEG): container finished" podID="3469ba31-9f3d-444f-803b-87b26533a34a" containerID="0100b13f042d022fe1756bfc12c45166c0d85abeffe96373088d7af3d6387743" exitCode=0 Dec 04 15:54:40 crc kubenswrapper[4878]: 
I1204 15:54:40.813864 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" event={"ID":"3469ba31-9f3d-444f-803b-87b26533a34a","Type":"ContainerDied","Data":"0100b13f042d022fe1756bfc12c45166c0d85abeffe96373088d7af3d6387743"} Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.813911 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:40 crc kubenswrapper[4878]: I1204 15:54:40.991539 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.195569 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6f3b9b-dba6-4d5f-b138-1c169fb069fc" path="/var/lib/kubelet/pods/6e6f3b9b-dba6-4d5f-b138-1c169fb069fc/volumes" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.329761 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-49lsz"] Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.373963 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4cww7"] Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.375916 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.382701 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.432435 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vqhwx"] Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.441294 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.448408 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.472653 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4cww7"] Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.497671 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63b68bea-2a97-49cb-bba4-86c730468f8d-ovs-rundir\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.497740 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw92z\" (UniqueName: \"kubernetes.io/projected/ac85cc2d-2dde-4497-aa4d-92603905d41a-kube-api-access-zw92z\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.497770 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63b68bea-2a97-49cb-bba4-86c730468f8d-ovn-rundir\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.497795 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phbbd\" (UniqueName: \"kubernetes.io/projected/63b68bea-2a97-49cb-bba4-86c730468f8d-kube-api-access-phbbd\") pod \"ovn-controller-metrics-vqhwx\" (UID: 
\"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.497835 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.497858 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63b68bea-2a97-49cb-bba4-86c730468f8d-config\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.497908 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63b68bea-2a97-49cb-bba4-86c730468f8d-combined-ca-bundle\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.497977 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-config\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.498056 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63b68bea-2a97-49cb-bba4-86c730468f8d-metrics-certs-tls-certs\") pod 
\"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.498100 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.518990 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vqhwx"] Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601251 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63b68bea-2a97-49cb-bba4-86c730468f8d-ovs-rundir\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601311 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw92z\" (UniqueName: \"kubernetes.io/projected/ac85cc2d-2dde-4497-aa4d-92603905d41a-kube-api-access-zw92z\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601334 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63b68bea-2a97-49cb-bba4-86c730468f8d-ovn-rundir\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601354 4878 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-phbbd\" (UniqueName: \"kubernetes.io/projected/63b68bea-2a97-49cb-bba4-86c730468f8d-kube-api-access-phbbd\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601385 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601403 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63b68bea-2a97-49cb-bba4-86c730468f8d-config\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601428 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63b68bea-2a97-49cb-bba4-86c730468f8d-combined-ca-bundle\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601464 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-config\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601517 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/63b68bea-2a97-49cb-bba4-86c730468f8d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.601538 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.602527 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.603317 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63b68bea-2a97-49cb-bba4-86c730468f8d-config\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.603529 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.603924 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63b68bea-2a97-49cb-bba4-86c730468f8d-ovs-rundir\") pod \"ovn-controller-metrics-vqhwx\" 
(UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.604389 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63b68bea-2a97-49cb-bba4-86c730468f8d-ovn-rundir\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.604726 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-config\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.607668 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63b68bea-2a97-49cb-bba4-86c730468f8d-combined-ca-bundle\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.613944 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63b68bea-2a97-49cb-bba4-86c730468f8d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.626783 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phbbd\" (UniqueName: \"kubernetes.io/projected/63b68bea-2a97-49cb-bba4-86c730468f8d-kube-api-access-phbbd\") pod \"ovn-controller-metrics-vqhwx\" (UID: \"63b68bea-2a97-49cb-bba4-86c730468f8d\") " 
pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.640763 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw92z\" (UniqueName: \"kubernetes.io/projected/ac85cc2d-2dde-4497-aa4d-92603905d41a-kube-api-access-zw92z\") pod \"dnsmasq-dns-74f6f696b9-4cww7\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.646665 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c6d8p"] Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.715173 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-wt5jb"] Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.716689 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.719757 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.732764 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.735591 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wt5jb"] Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.837958 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-config\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.838052 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt929\" (UniqueName: \"kubernetes.io/projected/315550ac-d3ca-4736-abad-f1cb130fcc4a-kube-api-access-jt929\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.838094 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-dns-svc\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.838384 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-vqhwx" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.838999 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.839207 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.941529 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-config\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.941622 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt929\" (UniqueName: \"kubernetes.io/projected/315550ac-d3ca-4736-abad-f1cb130fcc4a-kube-api-access-jt929\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.941644 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-dns-svc\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " 
pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.941672 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.941707 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.942848 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.943478 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-config\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.946773 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 
15:54:41.974599 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-dns-svc\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:41 crc kubenswrapper[4878]: I1204 15:54:41.984903 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt929\" (UniqueName: \"kubernetes.io/projected/315550ac-d3ca-4736-abad-f1cb130fcc4a-kube-api-access-jt929\") pod \"dnsmasq-dns-698758b865-wt5jb\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") " pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.236008 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.406240 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:42 crc kubenswrapper[4878]: E1204 15:54:42.406800 4878 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 04 15:54:42 crc kubenswrapper[4878]: E1204 15:54:42.406819 4878 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 04 15:54:42 crc kubenswrapper[4878]: E1204 15:54:42.406885 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift podName:10b4321d-097d-4ab2-8014-63c5b80e6839 nodeName:}" failed. 
No retries permitted until 2025-12-04 15:54:46.406851634 +0000 UTC m=+1130.369388590 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift") pod "swift-storage-0" (UID: "10b4321d-097d-4ab2-8014-63c5b80e6839") : configmap "swift-ring-files" not found Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.415967 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.462669 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.515613 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4cww7"] Dec 04 15:54:42 crc kubenswrapper[4878]: W1204 15:54:42.520169 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac85cc2d_2dde_4497_aa4d_92603905d41a.slice/crio-b6a50e5b02641864a162fe6925c354df5e4199a0e766ed3dc9f0fc4b6af90416 WatchSource:0}: Error finding container b6a50e5b02641864a162fe6925c354df5e4199a0e766ed3dc9f0fc4b6af90416: Status 404 returned error can't find the container with id b6a50e5b02641864a162fe6925c354df5e4199a0e766ed3dc9f0fc4b6af90416 Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.655741 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vqhwx"] Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.881490 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vqhwx" event={"ID":"63b68bea-2a97-49cb-bba4-86c730468f8d","Type":"ContainerStarted","Data":"e2a9ce3833af92f3b637a5cd2b10943bf9bf1462fbee096a40f24208e2a7fbaa"} Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.883639 4878 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" event={"ID":"ac85cc2d-2dde-4497-aa4d-92603905d41a","Type":"ContainerStarted","Data":"b6a50e5b02641864a162fe6925c354df5e4199a0e766ed3dc9f0fc4b6af90416"} Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.890109 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wt5jb"] Dec 04 15:54:42 crc kubenswrapper[4878]: W1204 15:54:42.896153 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod315550ac_d3ca_4736_abad_f1cb130fcc4a.slice/crio-3b2a16071916ec261ab77a3c0211baf5b13d07739bae6c54fd9137071b30ac4d WatchSource:0}: Error finding container 3b2a16071916ec261ab77a3c0211baf5b13d07739bae6c54fd9137071b30ac4d: Status 404 returned error can't find the container with id 3b2a16071916ec261ab77a3c0211baf5b13d07739bae6c54fd9137071b30ac4d Dec 04 15:54:42 crc kubenswrapper[4878]: I1204 15:54:42.931093 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.144784 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.150512 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.153907 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-d6hrr" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.154176 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.154659 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.157779 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.165573 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.330353 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cade2c0-4d05-4beb-9bfb-003446587673-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.330824 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1cade2c0-4d05-4beb-9bfb-003446587673-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.330890 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q55gw\" (UniqueName: \"kubernetes.io/projected/1cade2c0-4d05-4beb-9bfb-003446587673-kube-api-access-q55gw\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " 
pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.330941 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cade2c0-4d05-4beb-9bfb-003446587673-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.331009 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1cade2c0-4d05-4beb-9bfb-003446587673-scripts\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.331058 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cade2c0-4d05-4beb-9bfb-003446587673-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.331102 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cade2c0-4d05-4beb-9bfb-003446587673-config\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.433366 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cade2c0-4d05-4beb-9bfb-003446587673-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.433499 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1cade2c0-4d05-4beb-9bfb-003446587673-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.433540 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q55gw\" (UniqueName: \"kubernetes.io/projected/1cade2c0-4d05-4beb-9bfb-003446587673-kube-api-access-q55gw\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.433637 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cade2c0-4d05-4beb-9bfb-003446587673-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.433689 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1cade2c0-4d05-4beb-9bfb-003446587673-scripts\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.433726 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cade2c0-4d05-4beb-9bfb-003446587673-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.433787 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cade2c0-4d05-4beb-9bfb-003446587673-config\") pod \"ovn-northd-0\" (UID: 
\"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.434376 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1cade2c0-4d05-4beb-9bfb-003446587673-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.435381 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1cade2c0-4d05-4beb-9bfb-003446587673-scripts\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.435557 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cade2c0-4d05-4beb-9bfb-003446587673-config\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.441439 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cade2c0-4d05-4beb-9bfb-003446587673-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.441498 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cade2c0-4d05-4beb-9bfb-003446587673-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.441844 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1cade2c0-4d05-4beb-9bfb-003446587673-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.459302 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q55gw\" (UniqueName: \"kubernetes.io/projected/1cade2c0-4d05-4beb-9bfb-003446587673-kube-api-access-q55gw\") pod \"ovn-northd-0\" (UID: \"1cade2c0-4d05-4beb-9bfb-003446587673\") " pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.473265 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.909770 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.920301 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.926930 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wt5jb" event={"ID":"315550ac-d3ca-4736-abad-f1cb130fcc4a","Type":"ContainerStarted","Data":"3b2a16071916ec261ab77a3c0211baf5b13d07739bae6c54fd9137071b30ac4d"} Dec 04 15:54:43 crc kubenswrapper[4878]: I1204 15:54:43.948678 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 04 15:54:43 crc kubenswrapper[4878]: W1204 15:54:43.952313 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cade2c0_4d05_4beb_9bfb_003446587673.slice/crio-933a6f0c5a696ca6d3523e83f99f4fc17c21cb7baf28a3b52b140612a9a9fca0 WatchSource:0}: Error finding container 933a6f0c5a696ca6d3523e83f99f4fc17c21cb7baf28a3b52b140612a9a9fca0: Status 404 returned error can't find the container with id 
933a6f0c5a696ca6d3523e83f99f4fc17c21cb7baf28a3b52b140612a9a9fca0 Dec 04 15:54:44 crc kubenswrapper[4878]: I1204 15:54:44.939440 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1cade2c0-4d05-4beb-9bfb-003446587673","Type":"ContainerStarted","Data":"933a6f0c5a696ca6d3523e83f99f4fc17c21cb7baf28a3b52b140612a9a9fca0"} Dec 04 15:54:45 crc kubenswrapper[4878]: I1204 15:54:45.217842 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:45 crc kubenswrapper[4878]: I1204 15:54:45.218132 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:46 crc kubenswrapper[4878]: I1204 15:54:46.504594 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:46 crc kubenswrapper[4878]: E1204 15:54:46.504780 4878 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 04 15:54:46 crc kubenswrapper[4878]: E1204 15:54:46.504935 4878 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 04 15:54:46 crc kubenswrapper[4878]: E1204 15:54:46.505000 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift podName:10b4321d-097d-4ab2-8014-63c5b80e6839 nodeName:}" failed. No retries permitted until 2025-12-04 15:54:54.504978648 +0000 UTC m=+1138.467515614 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift") pod "swift-storage-0" (UID: "10b4321d-097d-4ab2-8014-63c5b80e6839") : configmap "swift-ring-files" not found Dec 04 15:54:48 crc kubenswrapper[4878]: I1204 15:54:48.973490 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f17e1868-a868-47aa-8e98-e60203d8295f","Type":"ContainerStarted","Data":"aa30f83cdc9e3faac13e73e9ab0e7b955edf28e4e8f407845d5f126ae6eafd7f"} Dec 04 15:54:50 crc kubenswrapper[4878]: I1204 15:54:50.995040 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vqhwx" event={"ID":"63b68bea-2a97-49cb-bba4-86c730468f8d","Type":"ContainerStarted","Data":"3c886a29d85064103674275b573a6346255e8556477414d4eb1328243d08190a"} Dec 04 15:54:50 crc kubenswrapper[4878]: I1204 15:54:50.997251 4878 generic.go:334] "Generic (PLEG): container finished" podID="ac85cc2d-2dde-4497-aa4d-92603905d41a" containerID="71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901" exitCode=0 Dec 04 15:54:50 crc kubenswrapper[4878]: I1204 15:54:50.997369 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" event={"ID":"ac85cc2d-2dde-4497-aa4d-92603905d41a","Type":"ContainerDied","Data":"71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901"} Dec 04 15:54:50 crc kubenswrapper[4878]: I1204 15:54:50.999802 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b85c4bb-73ad-4002-85b3-46a1f83cd326","Type":"ContainerStarted","Data":"e92152e572870aaebeb8c9f7a1ceb576d574fd9f0c0c2b871d6ac359655a8c97"} Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.002163 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" 
event={"ID":"3469ba31-9f3d-444f-803b-87b26533a34a","Type":"ContainerStarted","Data":"31ce0348b76213f50631f3231bb6f47eaaf4ddc817d366a941bd8eb099fd1ae8"} Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.002243 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.002252 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" podUID="3469ba31-9f3d-444f-803b-87b26533a34a" containerName="dnsmasq-dns" containerID="cri-o://31ce0348b76213f50631f3231bb6f47eaaf4ddc817d366a941bd8eb099fd1ae8" gracePeriod=10 Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.005119 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" event={"ID":"af68b966-34f8-4dae-984b-e04817aa4a02","Type":"ContainerStarted","Data":"87ee758c5c2cbdc49e03aab8639c74ed31a589082036128b06db0ed190beea27"} Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.005207 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.005222 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" podUID="af68b966-34f8-4dae-984b-e04817aa4a02" containerName="dnsmasq-dns" containerID="cri-o://87ee758c5c2cbdc49e03aab8639c74ed31a589082036128b06db0ed190beea27" gracePeriod=10 Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.006938 4878 generic.go:334] "Generic (PLEG): container finished" podID="315550ac-d3ca-4736-abad-f1cb130fcc4a" containerID="28a9e7c397ffa4fdd05da38ec32cf6abd29d30987a52ae642e1627d10a7f2fc0" exitCode=0 Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.007182 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wt5jb" 
event={"ID":"315550ac-d3ca-4736-abad-f1cb130fcc4a","Type":"ContainerDied","Data":"28a9e7c397ffa4fdd05da38ec32cf6abd29d30987a52ae642e1627d10a7f2fc0"} Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.025121 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vqhwx" podStartSLOduration=10.025091584 podStartE2EDuration="10.025091584s" podCreationTimestamp="2025-12-04 15:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:54:51.016523418 +0000 UTC m=+1134.979060384" watchObservedRunningTime="2025-12-04 15:54:51.025091584 +0000 UTC m=+1134.987628540" Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.089755 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" podStartSLOduration=13.081698639 podStartE2EDuration="51.089727292s" podCreationTimestamp="2025-12-04 15:54:00 +0000 UTC" firstStartedPulling="2025-12-04 15:54:01.689445093 +0000 UTC m=+1085.651982049" lastFinishedPulling="2025-12-04 15:54:39.697473746 +0000 UTC m=+1123.660010702" observedRunningTime="2025-12-04 15:54:51.086716496 +0000 UTC m=+1135.049253442" watchObservedRunningTime="2025-12-04 15:54:51.089727292 +0000 UTC m=+1135.052264248" Dec 04 15:54:51 crc kubenswrapper[4878]: I1204 15:54:51.160700 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" podStartSLOduration=13.359483226 podStartE2EDuration="14.16067836s" podCreationTimestamp="2025-12-04 15:54:37 +0000 UTC" firstStartedPulling="2025-12-04 15:54:39.568286921 +0000 UTC m=+1123.530823877" lastFinishedPulling="2025-12-04 15:54:40.369482055 +0000 UTC m=+1124.332019011" observedRunningTime="2025-12-04 15:54:51.128571841 +0000 UTC m=+1135.091108807" watchObservedRunningTime="2025-12-04 15:54:51.16067836 +0000 UTC m=+1135.123215316" Dec 04 15:54:52 crc 
kubenswrapper[4878]: I1204 15:54:52.031054 4878 generic.go:334] "Generic (PLEG): container finished" podID="3469ba31-9f3d-444f-803b-87b26533a34a" containerID="31ce0348b76213f50631f3231bb6f47eaaf4ddc817d366a941bd8eb099fd1ae8" exitCode=0 Dec 04 15:54:52 crc kubenswrapper[4878]: I1204 15:54:52.031149 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" event={"ID":"3469ba31-9f3d-444f-803b-87b26533a34a","Type":"ContainerDied","Data":"31ce0348b76213f50631f3231bb6f47eaaf4ddc817d366a941bd8eb099fd1ae8"} Dec 04 15:54:52 crc kubenswrapper[4878]: I1204 15:54:52.035536 4878 generic.go:334] "Generic (PLEG): container finished" podID="af68b966-34f8-4dae-984b-e04817aa4a02" containerID="87ee758c5c2cbdc49e03aab8639c74ed31a589082036128b06db0ed190beea27" exitCode=0 Dec 04 15:54:52 crc kubenswrapper[4878]: I1204 15:54:52.041569 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" event={"ID":"af68b966-34f8-4dae-984b-e04817aa4a02","Type":"ContainerDied","Data":"87ee758c5c2cbdc49e03aab8639c74ed31a589082036128b06db0ed190beea27"} Dec 04 15:54:52 crc kubenswrapper[4878]: I1204 15:54:52.549779 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 04 15:54:52 crc kubenswrapper[4878]: I1204 15:54:52.687041 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 04 15:54:52 crc kubenswrapper[4878]: I1204 15:54:52.697047 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:52 crc kubenswrapper[4878]: I1204 15:54:52.805386 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.643419 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.696211 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.760228 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-config\") pod \"3469ba31-9f3d-444f-803b-87b26533a34a\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.761033 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9zzh\" (UniqueName: \"kubernetes.io/projected/3469ba31-9f3d-444f-803b-87b26533a34a-kube-api-access-l9zzh\") pod \"3469ba31-9f3d-444f-803b-87b26533a34a\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.761073 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-dns-svc\") pod \"3469ba31-9f3d-444f-803b-87b26533a34a\" (UID: \"3469ba31-9f3d-444f-803b-87b26533a34a\") " Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.778614 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3469ba31-9f3d-444f-803b-87b26533a34a-kube-api-access-l9zzh" (OuterVolumeSpecName: "kube-api-access-l9zzh") pod "3469ba31-9f3d-444f-803b-87b26533a34a" (UID: "3469ba31-9f3d-444f-803b-87b26533a34a"). InnerVolumeSpecName "kube-api-access-l9zzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.845128 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3469ba31-9f3d-444f-803b-87b26533a34a" (UID: "3469ba31-9f3d-444f-803b-87b26533a34a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.860180 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-config" (OuterVolumeSpecName: "config") pod "3469ba31-9f3d-444f-803b-87b26533a34a" (UID: "3469ba31-9f3d-444f-803b-87b26533a34a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.862743 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvrsf\" (UniqueName: \"kubernetes.io/projected/af68b966-34f8-4dae-984b-e04817aa4a02-kube-api-access-wvrsf\") pod \"af68b966-34f8-4dae-984b-e04817aa4a02\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.862840 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-config\") pod \"af68b966-34f8-4dae-984b-e04817aa4a02\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.863249 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-dns-svc\") pod \"af68b966-34f8-4dae-984b-e04817aa4a02\" (UID: \"af68b966-34f8-4dae-984b-e04817aa4a02\") " Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 
15:54:53.863804 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.863841 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9zzh\" (UniqueName: \"kubernetes.io/projected/3469ba31-9f3d-444f-803b-87b26533a34a-kube-api-access-l9zzh\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.863853 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3469ba31-9f3d-444f-803b-87b26533a34a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.866804 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af68b966-34f8-4dae-984b-e04817aa4a02-kube-api-access-wvrsf" (OuterVolumeSpecName: "kube-api-access-wvrsf") pod "af68b966-34f8-4dae-984b-e04817aa4a02" (UID: "af68b966-34f8-4dae-984b-e04817aa4a02"). InnerVolumeSpecName "kube-api-access-wvrsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.906374 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-config" (OuterVolumeSpecName: "config") pod "af68b966-34f8-4dae-984b-e04817aa4a02" (UID: "af68b966-34f8-4dae-984b-e04817aa4a02"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.915032 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "af68b966-34f8-4dae-984b-e04817aa4a02" (UID: "af68b966-34f8-4dae-984b-e04817aa4a02"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.965913 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.965962 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvrsf\" (UniqueName: \"kubernetes.io/projected/af68b966-34f8-4dae-984b-e04817aa4a02-kube-api-access-wvrsf\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:53 crc kubenswrapper[4878]: I1204 15:54:53.965984 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af68b966-34f8-4dae-984b-e04817aa4a02-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.068833 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-55hsg" event={"ID":"947bbc4c-f673-433d-bc78-4411fea88516","Type":"ContainerStarted","Data":"23fc4aeb80322313b832a3ea49f1cc3686fc2c3b91566dbc5dae9fbd91b4f869"} Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.073683 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" event={"ID":"ac85cc2d-2dde-4497-aa4d-92603905d41a","Type":"ContainerStarted","Data":"fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe"} Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.074283 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.076718 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" event={"ID":"3469ba31-9f3d-444f-803b-87b26533a34a","Type":"ContainerDied","Data":"02897c99779600c5ddc3dfece2539ff4344e64f2e268e31a3e2f25d951e2f79a"} Dec 04 15:54:54 crc 
kubenswrapper[4878]: I1204 15:54:54.076779 4878 scope.go:117] "RemoveContainer" containerID="31ce0348b76213f50631f3231bb6f47eaaf4ddc817d366a941bd8eb099fd1ae8" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.076777 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-49lsz" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.079958 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.081214 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-c6d8p" event={"ID":"af68b966-34f8-4dae-984b-e04817aa4a02","Type":"ContainerDied","Data":"2148e642148a4ed769d89a115212dddea9ade130d4acd2df79e9c7ac6359413c"} Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.081538 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1cade2c0-4d05-4beb-9bfb-003446587673","Type":"ContainerStarted","Data":"2c1fedf02dc87146a60bc2aeaeb7cbb5b448d7d7cf46e526a649ee1d283bd147"} Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.083305 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wt5jb" event={"ID":"315550ac-d3ca-4736-abad-f1cb130fcc4a","Type":"ContainerStarted","Data":"8d304619dda8e5845b94774d4bb834b7688f3f9bcf47949d9acfbbddc368b96c"} Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.084160 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.096286 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-55hsg" podStartSLOduration=1.914903996 podStartE2EDuration="15.096264057s" podCreationTimestamp="2025-12-04 15:54:39 +0000 UTC" firstStartedPulling="2025-12-04 15:54:40.333586861 +0000 
UTC m=+1124.296123817" lastFinishedPulling="2025-12-04 15:54:53.514946932 +0000 UTC m=+1137.477483878" observedRunningTime="2025-12-04 15:54:54.093472557 +0000 UTC m=+1138.056009503" watchObservedRunningTime="2025-12-04 15:54:54.096264057 +0000 UTC m=+1138.058801013" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.106606 4878 scope.go:117] "RemoveContainer" containerID="0100b13f042d022fe1756bfc12c45166c0d85abeffe96373088d7af3d6387743" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.129083 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-wt5jb" podStartSLOduration=13.129054413 podStartE2EDuration="13.129054413s" podCreationTimestamp="2025-12-04 15:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:54:54.125981316 +0000 UTC m=+1138.088518272" watchObservedRunningTime="2025-12-04 15:54:54.129054413 +0000 UTC m=+1138.091591369" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.143310 4878 scope.go:117] "RemoveContainer" containerID="87ee758c5c2cbdc49e03aab8639c74ed31a589082036128b06db0ed190beea27" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.151896 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" podStartSLOduration=13.151849718 podStartE2EDuration="13.151849718s" podCreationTimestamp="2025-12-04 15:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:54:54.148847472 +0000 UTC m=+1138.111384418" watchObservedRunningTime="2025-12-04 15:54:54.151849718 +0000 UTC m=+1138.114386674" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.176187 4878 scope.go:117] "RemoveContainer" containerID="28565767df2206f235afdc1a8d304c5fd5cca5eac46d647199c755973547cd9b" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 
15:54:54.180612 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c6d8p"] Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.189495 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-c6d8p"] Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.198039 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-49lsz"] Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.207031 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-49lsz"] Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.505741 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:54:54 crc kubenswrapper[4878]: E1204 15:54:54.506039 4878 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 04 15:54:54 crc kubenswrapper[4878]: E1204 15:54:54.506099 4878 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 04 15:54:54 crc kubenswrapper[4878]: E1204 15:54:54.506203 4878 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift podName:10b4321d-097d-4ab2-8014-63c5b80e6839 nodeName:}" failed. No retries permitted until 2025-12-04 15:55:10.506175413 +0000 UTC m=+1154.468712369 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift") pod "swift-storage-0" (UID: "10b4321d-097d-4ab2-8014-63c5b80e6839") : configmap "swift-ring-files" not found Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.951895 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-cpfz9"] Dec 04 15:54:54 crc kubenswrapper[4878]: E1204 15:54:54.952446 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af68b966-34f8-4dae-984b-e04817aa4a02" containerName="init" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.952467 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="af68b966-34f8-4dae-984b-e04817aa4a02" containerName="init" Dec 04 15:54:54 crc kubenswrapper[4878]: E1204 15:54:54.952493 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3469ba31-9f3d-444f-803b-87b26533a34a" containerName="dnsmasq-dns" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.952503 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3469ba31-9f3d-444f-803b-87b26533a34a" containerName="dnsmasq-dns" Dec 04 15:54:54 crc kubenswrapper[4878]: E1204 15:54:54.952538 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af68b966-34f8-4dae-984b-e04817aa4a02" containerName="dnsmasq-dns" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.952548 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="af68b966-34f8-4dae-984b-e04817aa4a02" containerName="dnsmasq-dns" Dec 04 15:54:54 crc kubenswrapper[4878]: E1204 15:54:54.952568 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3469ba31-9f3d-444f-803b-87b26533a34a" containerName="init" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.952576 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3469ba31-9f3d-444f-803b-87b26533a34a" containerName="init" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.952791 
4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="3469ba31-9f3d-444f-803b-87b26533a34a" containerName="dnsmasq-dns" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.952823 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="af68b966-34f8-4dae-984b-e04817aa4a02" containerName="dnsmasq-dns" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.953725 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.975523 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8ba5-account-create-update-8k8xj"] Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.977137 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.991816 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cpfz9"] Dec 04 15:54:54 crc kubenswrapper[4878]: I1204 15:54:54.994542 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.008678 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8ba5-account-create-update-8k8xj"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.121497 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44154d84-cb0b-4894-9b50-4a93fafc5136-operator-scripts\") pod \"keystone-8ba5-account-create-update-8k8xj\" (UID: \"44154d84-cb0b-4894-9b50-4a93fafc5136\") " pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.121576 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55939a1d-54f3-4c84-a201-dc129636438b-operator-scripts\") pod \"keystone-db-create-cpfz9\" (UID: \"55939a1d-54f3-4c84-a201-dc129636438b\") " pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.121666 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzkdv\" (UniqueName: \"kubernetes.io/projected/44154d84-cb0b-4894-9b50-4a93fafc5136-kube-api-access-kzkdv\") pod \"keystone-8ba5-account-create-update-8k8xj\" (UID: \"44154d84-cb0b-4894-9b50-4a93fafc5136\") " pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.121748 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9knlv\" (UniqueName: \"kubernetes.io/projected/55939a1d-54f3-4c84-a201-dc129636438b-kube-api-access-9knlv\") pod \"keystone-db-create-cpfz9\" (UID: \"55939a1d-54f3-4c84-a201-dc129636438b\") " pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.130414 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1cade2c0-4d05-4beb-9bfb-003446587673","Type":"ContainerStarted","Data":"8d6bae180c77f614653ad0116ca42ddacf0c165ed88d54976a9fcf71e295657a"} Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.132046 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.168089 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-4gmz6"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.169944 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.175708 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.645461003 podStartE2EDuration="12.17567419s" podCreationTimestamp="2025-12-04 15:54:43 +0000 UTC" firstStartedPulling="2025-12-04 15:54:43.954061892 +0000 UTC m=+1127.916598848" lastFinishedPulling="2025-12-04 15:54:53.484275089 +0000 UTC m=+1137.446812035" observedRunningTime="2025-12-04 15:54:55.168040678 +0000 UTC m=+1139.130577634" watchObservedRunningTime="2025-12-04 15:54:55.17567419 +0000 UTC m=+1139.138211146" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.204994 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3469ba31-9f3d-444f-803b-87b26533a34a" path="/var/lib/kubelet/pods/3469ba31-9f3d-444f-803b-87b26533a34a/volumes" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.205686 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af68b966-34f8-4dae-984b-e04817aa4a02" path="/var/lib/kubelet/pods/af68b966-34f8-4dae-984b-e04817aa4a02/volumes" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.212212 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4gmz6"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.223449 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44154d84-cb0b-4894-9b50-4a93fafc5136-operator-scripts\") pod \"keystone-8ba5-account-create-update-8k8xj\" (UID: \"44154d84-cb0b-4894-9b50-4a93fafc5136\") " pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.223924 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/55939a1d-54f3-4c84-a201-dc129636438b-operator-scripts\") pod \"keystone-db-create-cpfz9\" (UID: \"55939a1d-54f3-4c84-a201-dc129636438b\") " pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.224010 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzkdv\" (UniqueName: \"kubernetes.io/projected/44154d84-cb0b-4894-9b50-4a93fafc5136-kube-api-access-kzkdv\") pod \"keystone-8ba5-account-create-update-8k8xj\" (UID: \"44154d84-cb0b-4894-9b50-4a93fafc5136\") " pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.224099 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9knlv\" (UniqueName: \"kubernetes.io/projected/55939a1d-54f3-4c84-a201-dc129636438b-kube-api-access-9knlv\") pod \"keystone-db-create-cpfz9\" (UID: \"55939a1d-54f3-4c84-a201-dc129636438b\") " pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.225040 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55939a1d-54f3-4c84-a201-dc129636438b-operator-scripts\") pod \"keystone-db-create-cpfz9\" (UID: \"55939a1d-54f3-4c84-a201-dc129636438b\") " pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.225464 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44154d84-cb0b-4894-9b50-4a93fafc5136-operator-scripts\") pod \"keystone-8ba5-account-create-update-8k8xj\" (UID: \"44154d84-cb0b-4894-9b50-4a93fafc5136\") " pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.252553 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzkdv\" (UniqueName: 
\"kubernetes.io/projected/44154d84-cb0b-4894-9b50-4a93fafc5136-kube-api-access-kzkdv\") pod \"keystone-8ba5-account-create-update-8k8xj\" (UID: \"44154d84-cb0b-4894-9b50-4a93fafc5136\") " pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.254425 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9knlv\" (UniqueName: \"kubernetes.io/projected/55939a1d-54f3-4c84-a201-dc129636438b-kube-api-access-9knlv\") pod \"keystone-db-create-cpfz9\" (UID: \"55939a1d-54f3-4c84-a201-dc129636438b\") " pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.281664 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.305036 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.329509 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aa82d2a-1f5f-4689-bb38-fc00144e2174-operator-scripts\") pod \"placement-db-create-4gmz6\" (UID: \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\") " pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.329623 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6smcr\" (UniqueName: \"kubernetes.io/projected/2aa82d2a-1f5f-4689-bb38-fc00144e2174-kube-api-access-6smcr\") pod \"placement-db-create-4gmz6\" (UID: \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\") " pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.377028 4878 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-7c8e-account-create-update-s7lxm"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.378477 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.386636 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.409665 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c8e-account-create-update-s7lxm"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.442461 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aa82d2a-1f5f-4689-bb38-fc00144e2174-operator-scripts\") pod \"placement-db-create-4gmz6\" (UID: \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\") " pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.442556 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6smcr\" (UniqueName: \"kubernetes.io/projected/2aa82d2a-1f5f-4689-bb38-fc00144e2174-kube-api-access-6smcr\") pod \"placement-db-create-4gmz6\" (UID: \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\") " pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.444385 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aa82d2a-1f5f-4689-bb38-fc00144e2174-operator-scripts\") pod \"placement-db-create-4gmz6\" (UID: \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\") " pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.489651 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6smcr\" (UniqueName: 
\"kubernetes.io/projected/2aa82d2a-1f5f-4689-bb38-fc00144e2174-kube-api-access-6smcr\") pod \"placement-db-create-4gmz6\" (UID: \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\") " pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.495992 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-mrm88"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.497463 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mrm88" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.499119 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.502992 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mrm88"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.543825 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xj95\" (UniqueName: \"kubernetes.io/projected/7dda6827-59a8-4cdf-9446-555d17a5793a-kube-api-access-5xj95\") pod \"placement-7c8e-account-create-update-s7lxm\" (UID: \"7dda6827-59a8-4cdf-9446-555d17a5793a\") " pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.543901 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dda6827-59a8-4cdf-9446-555d17a5793a-operator-scripts\") pod \"placement-7c8e-account-create-update-s7lxm\" (UID: \"7dda6827-59a8-4cdf-9446-555d17a5793a\") " pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.589017 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-90b3-account-create-update-hm8lq"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 
15:54:55.590888 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.596713 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.606269 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-90b3-account-create-update-hm8lq"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.646490 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xs28\" (UniqueName: \"kubernetes.io/projected/2737dc43-bf57-49c6-ab53-06ba48bfc80a-kube-api-access-7xs28\") pod \"glance-db-create-mrm88\" (UID: \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\") " pod="openstack/glance-db-create-mrm88" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.646545 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xj95\" (UniqueName: \"kubernetes.io/projected/7dda6827-59a8-4cdf-9446-555d17a5793a-kube-api-access-5xj95\") pod \"placement-7c8e-account-create-update-s7lxm\" (UID: \"7dda6827-59a8-4cdf-9446-555d17a5793a\") " pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.646587 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2737dc43-bf57-49c6-ab53-06ba48bfc80a-operator-scripts\") pod \"glance-db-create-mrm88\" (UID: \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\") " pod="openstack/glance-db-create-mrm88" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.646608 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dda6827-59a8-4cdf-9446-555d17a5793a-operator-scripts\") pod 
\"placement-7c8e-account-create-update-s7lxm\" (UID: \"7dda6827-59a8-4cdf-9446-555d17a5793a\") " pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.647522 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dda6827-59a8-4cdf-9446-555d17a5793a-operator-scripts\") pod \"placement-7c8e-account-create-update-s7lxm\" (UID: \"7dda6827-59a8-4cdf-9446-555d17a5793a\") " pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.667243 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xj95\" (UniqueName: \"kubernetes.io/projected/7dda6827-59a8-4cdf-9446-555d17a5793a-kube-api-access-5xj95\") pod \"placement-7c8e-account-create-update-s7lxm\" (UID: \"7dda6827-59a8-4cdf-9446-555d17a5793a\") " pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.723915 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.748488 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2737dc43-bf57-49c6-ab53-06ba48bfc80a-operator-scripts\") pod \"glance-db-create-mrm88\" (UID: \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\") " pod="openstack/glance-db-create-mrm88" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.748610 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpr54\" (UniqueName: \"kubernetes.io/projected/fa6917aa-17e3-4bff-b2e9-c5101344b039-kube-api-access-kpr54\") pod \"glance-90b3-account-create-update-hm8lq\" (UID: \"fa6917aa-17e3-4bff-b2e9-c5101344b039\") " pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.748736 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6917aa-17e3-4bff-b2e9-c5101344b039-operator-scripts\") pod \"glance-90b3-account-create-update-hm8lq\" (UID: \"fa6917aa-17e3-4bff-b2e9-c5101344b039\") " pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.748790 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xs28\" (UniqueName: \"kubernetes.io/projected/2737dc43-bf57-49c6-ab53-06ba48bfc80a-kube-api-access-7xs28\") pod \"glance-db-create-mrm88\" (UID: \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\") " pod="openstack/glance-db-create-mrm88" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.750124 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2737dc43-bf57-49c6-ab53-06ba48bfc80a-operator-scripts\") pod 
\"glance-db-create-mrm88\" (UID: \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\") " pod="openstack/glance-db-create-mrm88" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.769131 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xs28\" (UniqueName: \"kubernetes.io/projected/2737dc43-bf57-49c6-ab53-06ba48bfc80a-kube-api-access-7xs28\") pod \"glance-db-create-mrm88\" (UID: \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\") " pod="openstack/glance-db-create-mrm88" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.851136 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpr54\" (UniqueName: \"kubernetes.io/projected/fa6917aa-17e3-4bff-b2e9-c5101344b039-kube-api-access-kpr54\") pod \"glance-90b3-account-create-update-hm8lq\" (UID: \"fa6917aa-17e3-4bff-b2e9-c5101344b039\") " pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.851260 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6917aa-17e3-4bff-b2e9-c5101344b039-operator-scripts\") pod \"glance-90b3-account-create-update-hm8lq\" (UID: \"fa6917aa-17e3-4bff-b2e9-c5101344b039\") " pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.852635 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6917aa-17e3-4bff-b2e9-c5101344b039-operator-scripts\") pod \"glance-90b3-account-create-update-hm8lq\" (UID: \"fa6917aa-17e3-4bff-b2e9-c5101344b039\") " pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.867305 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-cpfz9"] Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.874719 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpr54\" (UniqueName: \"kubernetes.io/projected/fa6917aa-17e3-4bff-b2e9-c5101344b039-kube-api-access-kpr54\") pod \"glance-90b3-account-create-update-hm8lq\" (UID: \"fa6917aa-17e3-4bff-b2e9-c5101344b039\") " pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.909732 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mrm88" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.936657 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:55 crc kubenswrapper[4878]: I1204 15:54:55.971797 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8ba5-account-create-update-8k8xj"] Dec 04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.088017 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-4gmz6"] Dec 04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.211793 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cpfz9" event={"ID":"55939a1d-54f3-4c84-a201-dc129636438b","Type":"ContainerStarted","Data":"ef1c70716a55a667d49a91cfd53cd687e55221ee9c50714540258a8479327c28"} Dec 04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.215570 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cpfz9" event={"ID":"55939a1d-54f3-4c84-a201-dc129636438b","Type":"ContainerStarted","Data":"baa3ef97e9e936914db50542669395f09ed2a02cd1fecd3284378ba7580c1043"} Dec 04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.224679 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4gmz6" event={"ID":"2aa82d2a-1f5f-4689-bb38-fc00144e2174","Type":"ContainerStarted","Data":"c8175792a6858d8efe1a7cc2a5886fcf44b0beb38178bac40536256b807af97b"} Dec 
04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.228025 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8ba5-account-create-update-8k8xj" event={"ID":"44154d84-cb0b-4894-9b50-4a93fafc5136","Type":"ContainerStarted","Data":"edc0abe61fe4ca1c4ef8d90648c5850e3e8cd2c4a9e4a10b7128d813a0711598"} Dec 04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.245137 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-cpfz9" podStartSLOduration=2.245119443 podStartE2EDuration="2.245119443s" podCreationTimestamp="2025-12-04 15:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:54:56.240783054 +0000 UTC m=+1140.203320020" watchObservedRunningTime="2025-12-04 15:54:56.245119443 +0000 UTC m=+1140.207656399" Dec 04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.262156 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8ba5-account-create-update-8k8xj" podStartSLOduration=2.262124292 podStartE2EDuration="2.262124292s" podCreationTimestamp="2025-12-04 15:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:54:56.260183443 +0000 UTC m=+1140.222720399" watchObservedRunningTime="2025-12-04 15:54:56.262124292 +0000 UTC m=+1140.224661248" Dec 04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.283219 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c8e-account-create-update-s7lxm"] Dec 04 15:54:56 crc kubenswrapper[4878]: W1204 15:54:56.293371 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dda6827_59a8_4cdf_9446_555d17a5793a.slice/crio-9ecc404c9e99a36a64040d2a46a50b75b8cb326e8c56750dab24e7f3bbf0c0a1 WatchSource:0}: Error finding 
container 9ecc404c9e99a36a64040d2a46a50b75b8cb326e8c56750dab24e7f3bbf0c0a1: Status 404 returned error can't find the container with id 9ecc404c9e99a36a64040d2a46a50b75b8cb326e8c56750dab24e7f3bbf0c0a1 Dec 04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.476266 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-mrm88"] Dec 04 15:54:56 crc kubenswrapper[4878]: I1204 15:54:56.559260 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-90b3-account-create-update-hm8lq"] Dec 04 15:54:56 crc kubenswrapper[4878]: W1204 15:54:56.624735 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa6917aa_17e3_4bff_b2e9_c5101344b039.slice/crio-9da4d33948be0975170dbda63680fd2d49ec764caccb1ff9a82d3b1e20cbb210 WatchSource:0}: Error finding container 9da4d33948be0975170dbda63680fd2d49ec764caccb1ff9a82d3b1e20cbb210: Status 404 returned error can't find the container with id 9da4d33948be0975170dbda63680fd2d49ec764caccb1ff9a82d3b1e20cbb210 Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.251233 4878 generic.go:334] "Generic (PLEG): container finished" podID="fa6917aa-17e3-4bff-b2e9-c5101344b039" containerID="8f039a9ac2f7c154f5e6bf61bdd9cd061eb8847b21939c5279350a0937f3db44" exitCode=0 Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.251340 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-90b3-account-create-update-hm8lq" event={"ID":"fa6917aa-17e3-4bff-b2e9-c5101344b039","Type":"ContainerDied","Data":"8f039a9ac2f7c154f5e6bf61bdd9cd061eb8847b21939c5279350a0937f3db44"} Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.251960 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-90b3-account-create-update-hm8lq" event={"ID":"fa6917aa-17e3-4bff-b2e9-c5101344b039","Type":"ContainerStarted","Data":"9da4d33948be0975170dbda63680fd2d49ec764caccb1ff9a82d3b1e20cbb210"} Dec 04 15:54:57 crc 
kubenswrapper[4878]: I1204 15:54:57.258413 4878 generic.go:334] "Generic (PLEG): container finished" podID="55939a1d-54f3-4c84-a201-dc129636438b" containerID="ef1c70716a55a667d49a91cfd53cd687e55221ee9c50714540258a8479327c28" exitCode=0 Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.258631 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cpfz9" event={"ID":"55939a1d-54f3-4c84-a201-dc129636438b","Type":"ContainerDied","Data":"ef1c70716a55a667d49a91cfd53cd687e55221ee9c50714540258a8479327c28"} Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.266718 4878 generic.go:334] "Generic (PLEG): container finished" podID="7dda6827-59a8-4cdf-9446-555d17a5793a" containerID="bc31cf9cbe560c148b9d233c18d41b8afe51829ddc31bba4725f47e2778c6455" exitCode=0 Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.266814 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8e-account-create-update-s7lxm" event={"ID":"7dda6827-59a8-4cdf-9446-555d17a5793a","Type":"ContainerDied","Data":"bc31cf9cbe560c148b9d233c18d41b8afe51829ddc31bba4725f47e2778c6455"} Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.266856 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8e-account-create-update-s7lxm" event={"ID":"7dda6827-59a8-4cdf-9446-555d17a5793a","Type":"ContainerStarted","Data":"9ecc404c9e99a36a64040d2a46a50b75b8cb326e8c56750dab24e7f3bbf0c0a1"} Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.271116 4878 generic.go:334] "Generic (PLEG): container finished" podID="2aa82d2a-1f5f-4689-bb38-fc00144e2174" containerID="1b7794a84c84c939e2d37b3a52286d63f8ba17f7b2b9ca711e4e055240d86b93" exitCode=0 Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.271233 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4gmz6" 
event={"ID":"2aa82d2a-1f5f-4689-bb38-fc00144e2174","Type":"ContainerDied","Data":"1b7794a84c84c939e2d37b3a52286d63f8ba17f7b2b9ca711e4e055240d86b93"} Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.274387 4878 generic.go:334] "Generic (PLEG): container finished" podID="2737dc43-bf57-49c6-ab53-06ba48bfc80a" containerID="4b482636a5d728ff14eb52c5711a9c4a87266c3d0ba8e23ea72c7c2f8e30b293" exitCode=0 Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.274465 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mrm88" event={"ID":"2737dc43-bf57-49c6-ab53-06ba48bfc80a","Type":"ContainerDied","Data":"4b482636a5d728ff14eb52c5711a9c4a87266c3d0ba8e23ea72c7c2f8e30b293"} Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.274505 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mrm88" event={"ID":"2737dc43-bf57-49c6-ab53-06ba48bfc80a","Type":"ContainerStarted","Data":"38197aa4cbf5ceb61008342b282131b755a911ebd85c3b06e7da6bc69dc384a7"} Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.276925 4878 generic.go:334] "Generic (PLEG): container finished" podID="44154d84-cb0b-4894-9b50-4a93fafc5136" containerID="d2d2dd892ba97197e161dc0633a5ec3d2e7f17c30965eacdc60641d615fdcb93" exitCode=0 Dec 04 15:54:57 crc kubenswrapper[4878]: I1204 15:54:57.278017 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8ba5-account-create-update-8k8xj" event={"ID":"44154d84-cb0b-4894-9b50-4a93fafc5136","Type":"ContainerDied","Data":"d2d2dd892ba97197e161dc0633a5ec3d2e7f17c30965eacdc60641d615fdcb93"} Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.369948 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.478183 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpr54\" (UniqueName: \"kubernetes.io/projected/fa6917aa-17e3-4bff-b2e9-c5101344b039-kube-api-access-kpr54\") pod \"fa6917aa-17e3-4bff-b2e9-c5101344b039\" (UID: \"fa6917aa-17e3-4bff-b2e9-c5101344b039\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.478383 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6917aa-17e3-4bff-b2e9-c5101344b039-operator-scripts\") pod \"fa6917aa-17e3-4bff-b2e9-c5101344b039\" (UID: \"fa6917aa-17e3-4bff-b2e9-c5101344b039\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.479223 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa6917aa-17e3-4bff-b2e9-c5101344b039-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fa6917aa-17e3-4bff-b2e9-c5101344b039" (UID: "fa6917aa-17e3-4bff-b2e9-c5101344b039"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.490853 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa6917aa-17e3-4bff-b2e9-c5101344b039-kube-api-access-kpr54" (OuterVolumeSpecName: "kube-api-access-kpr54") pod "fa6917aa-17e3-4bff-b2e9-c5101344b039" (UID: "fa6917aa-17e3-4bff-b2e9-c5101344b039"). InnerVolumeSpecName "kube-api-access-kpr54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.580555 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpr54\" (UniqueName: \"kubernetes.io/projected/fa6917aa-17e3-4bff-b2e9-c5101344b039-kube-api-access-kpr54\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.580590 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fa6917aa-17e3-4bff-b2e9-c5101344b039-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.592286 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.597741 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.609855 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.671228 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mrm88" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.681084 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.681380 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6smcr\" (UniqueName: \"kubernetes.io/projected/2aa82d2a-1f5f-4689-bb38-fc00144e2174-kube-api-access-6smcr\") pod \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\" (UID: \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.681559 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aa82d2a-1f5f-4689-bb38-fc00144e2174-operator-scripts\") pod \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\" (UID: \"2aa82d2a-1f5f-4689-bb38-fc00144e2174\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.681599 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55939a1d-54f3-4c84-a201-dc129636438b-operator-scripts\") pod \"55939a1d-54f3-4c84-a201-dc129636438b\" (UID: \"55939a1d-54f3-4c84-a201-dc129636438b\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.681723 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9knlv\" (UniqueName: \"kubernetes.io/projected/55939a1d-54f3-4c84-a201-dc129636438b-kube-api-access-9knlv\") pod \"55939a1d-54f3-4c84-a201-dc129636438b\" (UID: \"55939a1d-54f3-4c84-a201-dc129636438b\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.682279 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2aa82d2a-1f5f-4689-bb38-fc00144e2174-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2aa82d2a-1f5f-4689-bb38-fc00144e2174" (UID: "2aa82d2a-1f5f-4689-bb38-fc00144e2174"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.682351 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55939a1d-54f3-4c84-a201-dc129636438b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55939a1d-54f3-4c84-a201-dc129636438b" (UID: "55939a1d-54f3-4c84-a201-dc129636438b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.685830 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2aa82d2a-1f5f-4689-bb38-fc00144e2174-kube-api-access-6smcr" (OuterVolumeSpecName: "kube-api-access-6smcr") pod "2aa82d2a-1f5f-4689-bb38-fc00144e2174" (UID: "2aa82d2a-1f5f-4689-bb38-fc00144e2174"). InnerVolumeSpecName "kube-api-access-6smcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.691377 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55939a1d-54f3-4c84-a201-dc129636438b-kube-api-access-9knlv" (OuterVolumeSpecName: "kube-api-access-9knlv") pod "55939a1d-54f3-4c84-a201-dc129636438b" (UID: "55939a1d-54f3-4c84-a201-dc129636438b"). InnerVolumeSpecName "kube-api-access-9knlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.783458 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2737dc43-bf57-49c6-ab53-06ba48bfc80a-operator-scripts\") pod \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\" (UID: \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.783536 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xs28\" (UniqueName: \"kubernetes.io/projected/2737dc43-bf57-49c6-ab53-06ba48bfc80a-kube-api-access-7xs28\") pod \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\" (UID: \"2737dc43-bf57-49c6-ab53-06ba48bfc80a\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.783629 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzkdv\" (UniqueName: \"kubernetes.io/projected/44154d84-cb0b-4894-9b50-4a93fafc5136-kube-api-access-kzkdv\") pod \"44154d84-cb0b-4894-9b50-4a93fafc5136\" (UID: \"44154d84-cb0b-4894-9b50-4a93fafc5136\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.783680 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44154d84-cb0b-4894-9b50-4a93fafc5136-operator-scripts\") pod \"44154d84-cb0b-4894-9b50-4a93fafc5136\" (UID: \"44154d84-cb0b-4894-9b50-4a93fafc5136\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.783709 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dda6827-59a8-4cdf-9446-555d17a5793a-operator-scripts\") pod \"7dda6827-59a8-4cdf-9446-555d17a5793a\" (UID: \"7dda6827-59a8-4cdf-9446-555d17a5793a\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.783766 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-5xj95\" (UniqueName: \"kubernetes.io/projected/7dda6827-59a8-4cdf-9446-555d17a5793a-kube-api-access-5xj95\") pod \"7dda6827-59a8-4cdf-9446-555d17a5793a\" (UID: \"7dda6827-59a8-4cdf-9446-555d17a5793a\") " Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.784414 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2aa82d2a-1f5f-4689-bb38-fc00144e2174-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.784446 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55939a1d-54f3-4c84-a201-dc129636438b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.784458 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9knlv\" (UniqueName: \"kubernetes.io/projected/55939a1d-54f3-4c84-a201-dc129636438b-kube-api-access-9knlv\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.784474 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6smcr\" (UniqueName: \"kubernetes.io/projected/2aa82d2a-1f5f-4689-bb38-fc00144e2174-kube-api-access-6smcr\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.785742 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2737dc43-bf57-49c6-ab53-06ba48bfc80a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2737dc43-bf57-49c6-ab53-06ba48bfc80a" (UID: "2737dc43-bf57-49c6-ab53-06ba48bfc80a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.785753 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dda6827-59a8-4cdf-9446-555d17a5793a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7dda6827-59a8-4cdf-9446-555d17a5793a" (UID: "7dda6827-59a8-4cdf-9446-555d17a5793a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.785777 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44154d84-cb0b-4894-9b50-4a93fafc5136-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "44154d84-cb0b-4894-9b50-4a93fafc5136" (UID: "44154d84-cb0b-4894-9b50-4a93fafc5136"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.788549 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44154d84-cb0b-4894-9b50-4a93fafc5136-kube-api-access-kzkdv" (OuterVolumeSpecName: "kube-api-access-kzkdv") pod "44154d84-cb0b-4894-9b50-4a93fafc5136" (UID: "44154d84-cb0b-4894-9b50-4a93fafc5136"). InnerVolumeSpecName "kube-api-access-kzkdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.788614 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2737dc43-bf57-49c6-ab53-06ba48bfc80a-kube-api-access-7xs28" (OuterVolumeSpecName: "kube-api-access-7xs28") pod "2737dc43-bf57-49c6-ab53-06ba48bfc80a" (UID: "2737dc43-bf57-49c6-ab53-06ba48bfc80a"). InnerVolumeSpecName "kube-api-access-7xs28". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.789007 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dda6827-59a8-4cdf-9446-555d17a5793a-kube-api-access-5xj95" (OuterVolumeSpecName: "kube-api-access-5xj95") pod "7dda6827-59a8-4cdf-9446-555d17a5793a" (UID: "7dda6827-59a8-4cdf-9446-555d17a5793a"). InnerVolumeSpecName "kube-api-access-5xj95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.807599 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-90b3-account-create-update-hm8lq" event={"ID":"fa6917aa-17e3-4bff-b2e9-c5101344b039","Type":"ContainerDied","Data":"9da4d33948be0975170dbda63680fd2d49ec764caccb1ff9a82d3b1e20cbb210"} Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.807655 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9da4d33948be0975170dbda63680fd2d49ec764caccb1ff9a82d3b1e20cbb210" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.807726 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-90b3-account-create-update-hm8lq" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.818956 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-4gmz6" event={"ID":"2aa82d2a-1f5f-4689-bb38-fc00144e2174","Type":"ContainerDied","Data":"c8175792a6858d8efe1a7cc2a5886fcf44b0beb38178bac40536256b807af97b"} Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.819004 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8175792a6858d8efe1a7cc2a5886fcf44b0beb38178bac40536256b807af97b" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.819070 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-4gmz6" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.822801 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-mrm88" event={"ID":"2737dc43-bf57-49c6-ab53-06ba48bfc80a","Type":"ContainerDied","Data":"38197aa4cbf5ceb61008342b282131b755a911ebd85c3b06e7da6bc69dc384a7"} Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.822881 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38197aa4cbf5ceb61008342b282131b755a911ebd85c3b06e7da6bc69dc384a7" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.822989 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-mrm88" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.830036 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8ba5-account-create-update-8k8xj" event={"ID":"44154d84-cb0b-4894-9b50-4a93fafc5136","Type":"ContainerDied","Data":"edc0abe61fe4ca1c4ef8d90648c5850e3e8cd2c4a9e4a10b7128d813a0711598"} Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.830088 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edc0abe61fe4ca1c4ef8d90648c5850e3e8cd2c4a9e4a10b7128d813a0711598" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.830157 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8ba5-account-create-update-8k8xj" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.833217 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-cpfz9" event={"ID":"55939a1d-54f3-4c84-a201-dc129636438b","Type":"ContainerDied","Data":"baa3ef97e9e936914db50542669395f09ed2a02cd1fecd3284378ba7580c1043"} Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.833259 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baa3ef97e9e936914db50542669395f09ed2a02cd1fecd3284378ba7580c1043" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.833268 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-cpfz9" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.834823 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c8e-account-create-update-s7lxm" event={"ID":"7dda6827-59a8-4cdf-9446-555d17a5793a","Type":"ContainerDied","Data":"9ecc404c9e99a36a64040d2a46a50b75b8cb326e8c56750dab24e7f3bbf0c0a1"} Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.834851 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ecc404c9e99a36a64040d2a46a50b75b8cb326e8c56750dab24e7f3bbf0c0a1" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.834942 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7c8e-account-create-update-s7lxm" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.886740 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dda6827-59a8-4cdf-9446-555d17a5793a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.886791 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xj95\" (UniqueName: \"kubernetes.io/projected/7dda6827-59a8-4cdf-9446-555d17a5793a-kube-api-access-5xj95\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.886803 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2737dc43-bf57-49c6-ab53-06ba48bfc80a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.886812 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xs28\" (UniqueName: \"kubernetes.io/projected/2737dc43-bf57-49c6-ab53-06ba48bfc80a-kube-api-access-7xs28\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.886822 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzkdv\" (UniqueName: \"kubernetes.io/projected/44154d84-cb0b-4894-9b50-4a93fafc5136-kube-api-access-kzkdv\") on node \"crc\" DevicePath \"\"" Dec 04 15:54:59 crc kubenswrapper[4878]: I1204 15:54:59.886832 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/44154d84-cb0b-4894-9b50-4a93fafc5136-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:01 crc kubenswrapper[4878]: I1204 15:55:01.747160 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.239121 4878 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.319593 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4cww7"] Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.320701 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" podUID="ac85cc2d-2dde-4497-aa4d-92603905d41a" containerName="dnsmasq-dns" containerID="cri-o://fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe" gracePeriod=10 Dec 04 15:55:02 crc kubenswrapper[4878]: E1204 15:55:02.569382 4878 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac85cc2d_2dde_4497_aa4d_92603905d41a.slice/crio-conmon-fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod947bbc4c_f673_433d_bc78_4411fea88516.slice/crio-conmon-23fc4aeb80322313b832a3ea49f1cc3686fc2c3b91566dbc5dae9fbd91b4f869.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod947bbc4c_f673_433d_bc78_4411fea88516.slice/crio-23fc4aeb80322313b832a3ea49f1cc3686fc2c3b91566dbc5dae9fbd91b4f869.scope\": RecentStats: unable to find data in memory cache]" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.818590 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.866676 4878 generic.go:334] "Generic (PLEG): container finished" podID="947bbc4c-f673-433d-bc78-4411fea88516" containerID="23fc4aeb80322313b832a3ea49f1cc3686fc2c3b91566dbc5dae9fbd91b4f869" exitCode=0 Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.866795 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-55hsg" event={"ID":"947bbc4c-f673-433d-bc78-4411fea88516","Type":"ContainerDied","Data":"23fc4aeb80322313b832a3ea49f1cc3686fc2c3b91566dbc5dae9fbd91b4f869"} Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.870646 4878 generic.go:334] "Generic (PLEG): container finished" podID="ac85cc2d-2dde-4497-aa4d-92603905d41a" containerID="fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe" exitCode=0 Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.870729 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.870731 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" event={"ID":"ac85cc2d-2dde-4497-aa4d-92603905d41a","Type":"ContainerDied","Data":"fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe"} Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.870799 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6f696b9-4cww7" event={"ID":"ac85cc2d-2dde-4497-aa4d-92603905d41a","Type":"ContainerDied","Data":"b6a50e5b02641864a162fe6925c354df5e4199a0e766ed3dc9f0fc4b6af90416"} Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.870828 4878 scope.go:117] "RemoveContainer" containerID="fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.896907 4878 scope.go:117] "RemoveContainer" containerID="71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.920793 4878 scope.go:117] "RemoveContainer" containerID="fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe" Dec 04 15:55:02 crc kubenswrapper[4878]: E1204 15:55:02.923137 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe\": container with ID starting with fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe not found: ID does not exist" containerID="fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.923194 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe"} err="failed to get container status 
\"fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe\": rpc error: code = NotFound desc = could not find container \"fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe\": container with ID starting with fc31e1c59873dc3a86a39be78fb1ae75f5a1b1c96e03c09f6c90dde79b716fbe not found: ID does not exist" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.923229 4878 scope.go:117] "RemoveContainer" containerID="71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901" Dec 04 15:55:02 crc kubenswrapper[4878]: E1204 15:55:02.923907 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901\": container with ID starting with 71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901 not found: ID does not exist" containerID="71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.923960 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901"} err="failed to get container status \"71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901\": rpc error: code = NotFound desc = could not find container \"71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901\": container with ID starting with 71362d5846acc9c3d2707f20628466d5497c54d995b29117d49730998332b901 not found: ID does not exist" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.952895 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw92z\" (UniqueName: \"kubernetes.io/projected/ac85cc2d-2dde-4497-aa4d-92603905d41a-kube-api-access-zw92z\") pod \"ac85cc2d-2dde-4497-aa4d-92603905d41a\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.953084 4878 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-ovsdbserver-nb\") pod \"ac85cc2d-2dde-4497-aa4d-92603905d41a\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.953114 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-config\") pod \"ac85cc2d-2dde-4497-aa4d-92603905d41a\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.953261 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-dns-svc\") pod \"ac85cc2d-2dde-4497-aa4d-92603905d41a\" (UID: \"ac85cc2d-2dde-4497-aa4d-92603905d41a\") " Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.961241 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac85cc2d-2dde-4497-aa4d-92603905d41a-kube-api-access-zw92z" (OuterVolumeSpecName: "kube-api-access-zw92z") pod "ac85cc2d-2dde-4497-aa4d-92603905d41a" (UID: "ac85cc2d-2dde-4497-aa4d-92603905d41a"). InnerVolumeSpecName "kube-api-access-zw92z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.999145 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-config" (OuterVolumeSpecName: "config") pod "ac85cc2d-2dde-4497-aa4d-92603905d41a" (UID: "ac85cc2d-2dde-4497-aa4d-92603905d41a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:02 crc kubenswrapper[4878]: I1204 15:55:02.999839 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac85cc2d-2dde-4497-aa4d-92603905d41a" (UID: "ac85cc2d-2dde-4497-aa4d-92603905d41a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:03 crc kubenswrapper[4878]: I1204 15:55:03.003678 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac85cc2d-2dde-4497-aa4d-92603905d41a" (UID: "ac85cc2d-2dde-4497-aa4d-92603905d41a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:03 crc kubenswrapper[4878]: I1204 15:55:03.055650 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw92z\" (UniqueName: \"kubernetes.io/projected/ac85cc2d-2dde-4497-aa4d-92603905d41a-kube-api-access-zw92z\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:03 crc kubenswrapper[4878]: I1204 15:55:03.055730 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:03 crc kubenswrapper[4878]: I1204 15:55:03.055740 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:03 crc kubenswrapper[4878]: I1204 15:55:03.055750 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac85cc2d-2dde-4497-aa4d-92603905d41a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:03 crc 
kubenswrapper[4878]: I1204 15:55:03.210145 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4cww7"] Dec 04 15:55:03 crc kubenswrapper[4878]: I1204 15:55:03.217754 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-4cww7"] Dec 04 15:55:03 crc kubenswrapper[4878]: I1204 15:55:03.540748 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.216624 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.381717 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/947bbc4c-f673-433d-bc78-4411fea88516-etc-swift\") pod \"947bbc4c-f673-433d-bc78-4411fea88516\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.381887 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-dispersionconf\") pod \"947bbc4c-f673-433d-bc78-4411fea88516\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.381982 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj72g\" (UniqueName: \"kubernetes.io/projected/947bbc4c-f673-433d-bc78-4411fea88516-kube-api-access-nj72g\") pod \"947bbc4c-f673-433d-bc78-4411fea88516\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.382016 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-combined-ca-bundle\") pod 
\"947bbc4c-f673-433d-bc78-4411fea88516\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.382053 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-scripts\") pod \"947bbc4c-f673-433d-bc78-4411fea88516\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.382076 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-swiftconf\") pod \"947bbc4c-f673-433d-bc78-4411fea88516\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.382140 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-ring-data-devices\") pod \"947bbc4c-f673-433d-bc78-4411fea88516\" (UID: \"947bbc4c-f673-433d-bc78-4411fea88516\") " Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.383005 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "947bbc4c-f673-433d-bc78-4411fea88516" (UID: "947bbc4c-f673-433d-bc78-4411fea88516"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.383727 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/947bbc4c-f673-433d-bc78-4411fea88516-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "947bbc4c-f673-433d-bc78-4411fea88516" (UID: "947bbc4c-f673-433d-bc78-4411fea88516"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.388102 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947bbc4c-f673-433d-bc78-4411fea88516-kube-api-access-nj72g" (OuterVolumeSpecName: "kube-api-access-nj72g") pod "947bbc4c-f673-433d-bc78-4411fea88516" (UID: "947bbc4c-f673-433d-bc78-4411fea88516"). InnerVolumeSpecName "kube-api-access-nj72g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.390495 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "947bbc4c-f673-433d-bc78-4411fea88516" (UID: "947bbc4c-f673-433d-bc78-4411fea88516"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.405165 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-scripts" (OuterVolumeSpecName: "scripts") pod "947bbc4c-f673-433d-bc78-4411fea88516" (UID: "947bbc4c-f673-433d-bc78-4411fea88516"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.411689 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "947bbc4c-f673-433d-bc78-4411fea88516" (UID: "947bbc4c-f673-433d-bc78-4411fea88516"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.414448 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "947bbc4c-f673-433d-bc78-4411fea88516" (UID: "947bbc4c-f673-433d-bc78-4411fea88516"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.484758 4878 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/947bbc4c-f673-433d-bc78-4411fea88516-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.484804 4878 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.484817 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj72g\" (UniqueName: \"kubernetes.io/projected/947bbc4c-f673-433d-bc78-4411fea88516-kube-api-access-nj72g\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.484828 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.484838 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.484847 4878 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/947bbc4c-f673-433d-bc78-4411fea88516-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.484856 4878 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/947bbc4c-f673-433d-bc78-4411fea88516-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.894199 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-55hsg" event={"ID":"947bbc4c-f673-433d-bc78-4411fea88516","Type":"ContainerDied","Data":"992e092c4daa2c12eef24fcc0573aee9d839540261ad83e0681bc647efbc9a56"} Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.894267 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992e092c4daa2c12eef24fcc0573aee9d839540261ad83e0681bc647efbc9a56" Dec 04 15:55:04 crc kubenswrapper[4878]: I1204 15:55:04.894326 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-55hsg" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.190411 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac85cc2d-2dde-4497-aa4d-92603905d41a" path="/var/lib/kubelet/pods/ac85cc2d-2dde-4497-aa4d-92603905d41a/volumes" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.385964 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.388904 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-cvfgn" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.654899 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-qt5xl-config-5b8j7"] Dec 04 15:55:05 crc kubenswrapper[4878]: E1204 15:55:05.655427 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa6917aa-17e3-4bff-b2e9-c5101344b039" containerName="mariadb-account-create-update" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655453 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa6917aa-17e3-4bff-b2e9-c5101344b039" containerName="mariadb-account-create-update" Dec 04 15:55:05 crc kubenswrapper[4878]: E1204 15:55:05.655474 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2737dc43-bf57-49c6-ab53-06ba48bfc80a" containerName="mariadb-database-create" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655483 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="2737dc43-bf57-49c6-ab53-06ba48bfc80a" containerName="mariadb-database-create" Dec 04 15:55:05 crc kubenswrapper[4878]: E1204 15:55:05.655498 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dda6827-59a8-4cdf-9446-555d17a5793a" containerName="mariadb-account-create-update" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655506 4878 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7dda6827-59a8-4cdf-9446-555d17a5793a" containerName="mariadb-account-create-update" Dec 04 15:55:05 crc kubenswrapper[4878]: E1204 15:55:05.655513 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac85cc2d-2dde-4497-aa4d-92603905d41a" containerName="init" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655522 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac85cc2d-2dde-4497-aa4d-92603905d41a" containerName="init" Dec 04 15:55:05 crc kubenswrapper[4878]: E1204 15:55:05.655554 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55939a1d-54f3-4c84-a201-dc129636438b" containerName="mariadb-database-create" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655561 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="55939a1d-54f3-4c84-a201-dc129636438b" containerName="mariadb-database-create" Dec 04 15:55:05 crc kubenswrapper[4878]: E1204 15:55:05.655591 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44154d84-cb0b-4894-9b50-4a93fafc5136" containerName="mariadb-account-create-update" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655598 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="44154d84-cb0b-4894-9b50-4a93fafc5136" containerName="mariadb-account-create-update" Dec 04 15:55:05 crc kubenswrapper[4878]: E1204 15:55:05.655607 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac85cc2d-2dde-4497-aa4d-92603905d41a" containerName="dnsmasq-dns" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655613 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac85cc2d-2dde-4497-aa4d-92603905d41a" containerName="dnsmasq-dns" Dec 04 15:55:05 crc kubenswrapper[4878]: E1204 15:55:05.655626 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947bbc4c-f673-433d-bc78-4411fea88516" containerName="swift-ring-rebalance" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655632 4878 
state_mem.go:107] "Deleted CPUSet assignment" podUID="947bbc4c-f673-433d-bc78-4411fea88516" containerName="swift-ring-rebalance" Dec 04 15:55:05 crc kubenswrapper[4878]: E1204 15:55:05.655643 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2aa82d2a-1f5f-4689-bb38-fc00144e2174" containerName="mariadb-database-create" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655649 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="2aa82d2a-1f5f-4689-bb38-fc00144e2174" containerName="mariadb-database-create" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655845 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="55939a1d-54f3-4c84-a201-dc129636438b" containerName="mariadb-database-create" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655863 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="2aa82d2a-1f5f-4689-bb38-fc00144e2174" containerName="mariadb-database-create" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655892 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac85cc2d-2dde-4497-aa4d-92603905d41a" containerName="dnsmasq-dns" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655905 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="2737dc43-bf57-49c6-ab53-06ba48bfc80a" containerName="mariadb-database-create" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655912 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="947bbc4c-f673-433d-bc78-4411fea88516" containerName="swift-ring-rebalance" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655922 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="44154d84-cb0b-4894-9b50-4a93fafc5136" containerName="mariadb-account-create-update" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655930 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa6917aa-17e3-4bff-b2e9-c5101344b039" 
containerName="mariadb-account-create-update" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.655942 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dda6827-59a8-4cdf-9446-555d17a5793a" containerName="mariadb-account-create-update" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.656737 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.662379 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.670110 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qt5xl-config-5b8j7"] Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.684316 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-qt5xl" podUID="76972b0d-60b4-427a-83fa-69d53c8c1e64" containerName="ovn-controller" probeResult="failure" output=< Dec 04 15:55:05 crc kubenswrapper[4878]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 04 15:55:05 crc kubenswrapper[4878]: > Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.713027 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-vlv79"] Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.720969 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.721089 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vlv79"] Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.725490 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xxz6r" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.725794 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.733580 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chz9k\" (UniqueName: \"kubernetes.io/projected/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-kube-api-access-chz9k\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.733724 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.733801 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-config-data\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.733923 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-combined-ca-bundle\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.734002 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-additional-scripts\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.734084 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run-ovn\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.734167 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-log-ovn\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.734242 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-db-sync-config-data\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.734340 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-scripts\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.734420 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w92bs\" (UniqueName: \"kubernetes.io/projected/15b32fab-0a73-417d-af80-9b289421b529-kube-api-access-w92bs\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.835450 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-additional-scripts\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.835757 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run-ovn\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.835925 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-log-ovn\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.836203 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-db-sync-config-data\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.836937 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-scripts\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.837055 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w92bs\" (UniqueName: \"kubernetes.io/projected/15b32fab-0a73-417d-af80-9b289421b529-kube-api-access-w92bs\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.836152 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run-ovn\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.836153 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-log-ovn\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.836315 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-additional-scripts\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.837691 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chz9k\" (UniqueName: \"kubernetes.io/projected/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-kube-api-access-chz9k\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.838067 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.838301 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-config-data\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.838247 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.838940 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-scripts\") pod \"ovn-controller-qt5xl-config-5b8j7\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.839219 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-combined-ca-bundle\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.841357 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-db-sync-config-data\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.842383 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-config-data\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.843459 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-combined-ca-bundle\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.855766 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chz9k\" (UniqueName: \"kubernetes.io/projected/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-kube-api-access-chz9k\") pod \"ovn-controller-qt5xl-config-5b8j7\" 
(UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.858674 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w92bs\" (UniqueName: \"kubernetes.io/projected/15b32fab-0a73-417d-af80-9b289421b529-kube-api-access-w92bs\") pod \"glance-db-sync-vlv79\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") " pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:05 crc kubenswrapper[4878]: I1204 15:55:05.978911 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:06 crc kubenswrapper[4878]: I1204 15:55:06.043219 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:06 crc kubenswrapper[4878]: I1204 15:55:06.476162 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-qt5xl-config-5b8j7"] Dec 04 15:55:06 crc kubenswrapper[4878]: W1204 15:55:06.479698 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bc3bff2_25c5_459e_bb2b_44b9e7aabb1c.slice/crio-c12a42a20c61b18e21df131558a6f96b2da6d3c1b9588f4210bb95ce1c36c3e1 WatchSource:0}: Error finding container c12a42a20c61b18e21df131558a6f96b2da6d3c1b9588f4210bb95ce1c36c3e1: Status 404 returned error can't find the container with id c12a42a20c61b18e21df131558a6f96b2da6d3c1b9588f4210bb95ce1c36c3e1 Dec 04 15:55:06 crc kubenswrapper[4878]: I1204 15:55:06.680891 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-vlv79"] Dec 04 15:55:06 crc kubenswrapper[4878]: W1204 15:55:06.686863 4878 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15b32fab_0a73_417d_af80_9b289421b529.slice/crio-fe70b98ce7290b8d7864ae8f882d421ac41d87eb95bee263cb8fecb10e552c16 WatchSource:0}: Error finding container fe70b98ce7290b8d7864ae8f882d421ac41d87eb95bee263cb8fecb10e552c16: Status 404 returned error can't find the container with id fe70b98ce7290b8d7864ae8f882d421ac41d87eb95bee263cb8fecb10e552c16 Dec 04 15:55:06 crc kubenswrapper[4878]: I1204 15:55:06.913744 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qt5xl-config-5b8j7" event={"ID":"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c","Type":"ContainerStarted","Data":"db3eb9e4dc4ddebc00b939ce927137183c7e14fcf68138b3497f67a064b3969c"} Dec 04 15:55:06 crc kubenswrapper[4878]: I1204 15:55:06.913800 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qt5xl-config-5b8j7" event={"ID":"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c","Type":"ContainerStarted","Data":"c12a42a20c61b18e21df131558a6f96b2da6d3c1b9588f4210bb95ce1c36c3e1"} Dec 04 15:55:06 crc kubenswrapper[4878]: I1204 15:55:06.915733 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vlv79" event={"ID":"15b32fab-0a73-417d-af80-9b289421b529","Type":"ContainerStarted","Data":"fe70b98ce7290b8d7864ae8f882d421ac41d87eb95bee263cb8fecb10e552c16"} Dec 04 15:55:06 crc kubenswrapper[4878]: I1204 15:55:06.938669 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-qt5xl-config-5b8j7" podStartSLOduration=1.93863772 podStartE2EDuration="1.93863772s" podCreationTimestamp="2025-12-04 15:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:55:06.933007189 +0000 UTC m=+1150.895544145" watchObservedRunningTime="2025-12-04 15:55:06.93863772 +0000 UTC m=+1150.901174676" Dec 04 15:55:07 crc kubenswrapper[4878]: I1204 15:55:07.931972 4878 
generic.go:334] "Generic (PLEG): container finished" podID="3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" containerID="db3eb9e4dc4ddebc00b939ce927137183c7e14fcf68138b3497f67a064b3969c" exitCode=0 Dec 04 15:55:07 crc kubenswrapper[4878]: I1204 15:55:07.932024 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qt5xl-config-5b8j7" event={"ID":"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c","Type":"ContainerDied","Data":"db3eb9e4dc4ddebc00b939ce927137183c7e14fcf68138b3497f67a064b3969c"} Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.280928 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.317497 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chz9k\" (UniqueName: \"kubernetes.io/projected/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-kube-api-access-chz9k\") pod \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.317979 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-log-ovn\") pod \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.318053 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run-ovn\") pod \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.318111 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-additional-scripts\") pod \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.318206 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-scripts\") pod \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.318248 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run\") pod \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\" (UID: \"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c\") " Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.319258 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run" (OuterVolumeSpecName: "var-run") pod "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" (UID: "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.320974 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" (UID: "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.321058 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" (UID: "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.322195 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" (UID: "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.322692 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-scripts" (OuterVolumeSpecName: "scripts") pod "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" (UID: "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.328244 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-kube-api-access-chz9k" (OuterVolumeSpecName: "kube-api-access-chz9k") pod "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" (UID: "3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c"). InnerVolumeSpecName "kube-api-access-chz9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.419842 4878 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.419893 4878 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.419906 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.419915 4878 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-run\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.419924 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chz9k\" (UniqueName: \"kubernetes.io/projected/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-kube-api-access-chz9k\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.419935 4878 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.952231 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-qt5xl-config-5b8j7" event={"ID":"3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c","Type":"ContainerDied","Data":"c12a42a20c61b18e21df131558a6f96b2da6d3c1b9588f4210bb95ce1c36c3e1"} Dec 04 15:55:09 crc 
kubenswrapper[4878]: I1204 15:55:09.952844 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c12a42a20c61b18e21df131558a6f96b2da6d3c1b9588f4210bb95ce1c36c3e1" Dec 04 15:55:09 crc kubenswrapper[4878]: I1204 15:55:09.952349 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-qt5xl-config-5b8j7" Dec 04 15:55:10 crc kubenswrapper[4878]: I1204 15:55:10.060608 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-qt5xl-config-5b8j7"] Dec 04 15:55:10 crc kubenswrapper[4878]: I1204 15:55:10.082474 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-qt5xl-config-5b8j7"] Dec 04 15:55:10 crc kubenswrapper[4878]: I1204 15:55:10.540749 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:55:10 crc kubenswrapper[4878]: I1204 15:55:10.548237 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/10b4321d-097d-4ab2-8014-63c5b80e6839-etc-swift\") pod \"swift-storage-0\" (UID: \"10b4321d-097d-4ab2-8014-63c5b80e6839\") " pod="openstack/swift-storage-0" Dec 04 15:55:10 crc kubenswrapper[4878]: I1204 15:55:10.667744 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-qt5xl" Dec 04 15:55:10 crc kubenswrapper[4878]: I1204 15:55:10.753952 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 04 15:55:11 crc kubenswrapper[4878]: I1204 15:55:11.210635 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" path="/var/lib/kubelet/pods/3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c/volumes" Dec 04 15:55:11 crc kubenswrapper[4878]: I1204 15:55:11.594292 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 04 15:55:11 crc kubenswrapper[4878]: I1204 15:55:11.979593 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"51b7c6bdc68a8a380c5dada469192d0d1d019c380787a539fbe7b69cb0207056"} Dec 04 15:55:21 crc kubenswrapper[4878]: E1204 15:55:21.561554 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 04 15:55:21 crc kubenswrapper[4878]: E1204 15:55:21.562568 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w92bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-vlv79_openstack(15b32fab-0a73-417d-af80-9b289421b529): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Dec 04 15:55:21 crc kubenswrapper[4878]: E1204 15:55:21.564036 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-vlv79" podUID="15b32fab-0a73-417d-af80-9b289421b529" Dec 04 15:55:22 crc kubenswrapper[4878]: I1204 15:55:22.291776 4878 generic.go:334] "Generic (PLEG): container finished" podID="f17e1868-a868-47aa-8e98-e60203d8295f" containerID="aa30f83cdc9e3faac13e73e9ab0e7b955edf28e4e8f407845d5f126ae6eafd7f" exitCode=0 Dec 04 15:55:22 crc kubenswrapper[4878]: I1204 15:55:22.292093 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f17e1868-a868-47aa-8e98-e60203d8295f","Type":"ContainerDied","Data":"aa30f83cdc9e3faac13e73e9ab0e7b955edf28e4e8f407845d5f126ae6eafd7f"} Dec 04 15:55:22 crc kubenswrapper[4878]: I1204 15:55:22.296795 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"57dc5f973d50246b046b74e83bc72bca5485611a15911ae5aa1f7bb3e1ce4940"} Dec 04 15:55:22 crc kubenswrapper[4878]: E1204 15:55:22.299290 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-vlv79" podUID="15b32fab-0a73-417d-af80-9b289421b529" Dec 04 15:55:23 crc kubenswrapper[4878]: I1204 15:55:23.324481 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f17e1868-a868-47aa-8e98-e60203d8295f","Type":"ContainerStarted","Data":"697b76773e156792e6a95d3f9f134a20e04ccd0a91e6dbc0064ece01cdda3891"} Dec 04 15:55:23 crc 
kubenswrapper[4878]: I1204 15:55:23.325717 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 04 15:55:23 crc kubenswrapper[4878]: I1204 15:55:23.326807 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"da79e907b615c5cc5ce9cb596d9ae64e76b15092fe1eb3ef31afed176d81a821"} Dec 04 15:55:23 crc kubenswrapper[4878]: I1204 15:55:23.326845 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"4b5cb4091a51695cfa473eb559de1840d431ab12aaca975ad7b241933390405b"} Dec 04 15:55:23 crc kubenswrapper[4878]: I1204 15:55:23.326855 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"10c1b6dd8606a9e38391a8fdb39176cbc19cc9bfd08d38726c01a6ad7b3108b4"} Dec 04 15:55:23 crc kubenswrapper[4878]: I1204 15:55:23.330077 4878 generic.go:334] "Generic (PLEG): container finished" podID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" containerID="e92152e572870aaebeb8c9f7a1ceb576d574fd9f0c0c2b871d6ac359655a8c97" exitCode=0 Dec 04 15:55:23 crc kubenswrapper[4878]: I1204 15:55:23.330121 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b85c4bb-73ad-4002-85b3-46a1f83cd326","Type":"ContainerDied","Data":"e92152e572870aaebeb8c9f7a1ceb576d574fd9f0c0c2b871d6ac359655a8c97"} Dec 04 15:55:23 crc kubenswrapper[4878]: I1204 15:55:23.374288 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.972513573 podStartE2EDuration="1m23.374245117s" podCreationTimestamp="2025-12-04 15:54:00 +0000 UTC" firstStartedPulling="2025-12-04 15:54:02.290924771 +0000 UTC m=+1086.253461727" 
lastFinishedPulling="2025-12-04 15:54:39.692656315 +0000 UTC m=+1123.655193271" observedRunningTime="2025-12-04 15:55:23.362668665 +0000 UTC m=+1167.325205621" watchObservedRunningTime="2025-12-04 15:55:23.374245117 +0000 UTC m=+1167.336782153" Dec 04 15:55:24 crc kubenswrapper[4878]: I1204 15:55:24.342777 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b85c4bb-73ad-4002-85b3-46a1f83cd326","Type":"ContainerStarted","Data":"af8f0d0429f9116de445491ff3e1f6a451a3a12b27b61b49708dd8def9270850"} Dec 04 15:55:24 crc kubenswrapper[4878]: I1204 15:55:24.343714 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 04 15:55:29 crc kubenswrapper[4878]: I1204 15:55:29.477103 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"e85aa9979b7750eec85385655da367a30bed5356e88566304d4dc826354570f2"} Dec 04 15:55:29 crc kubenswrapper[4878]: I1204 15:55:29.477485 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"ea6b2aed04ee334055173fe0ae4f7b1b23e66aeb061b5b0994054e3d3c82eccc"} Dec 04 15:55:30 crc kubenswrapper[4878]: I1204 15:55:30.488987 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"7fc4cc7abcdca3db3ab240be85714ce956376b7463b5b75a01a39c06ac4d61c0"} Dec 04 15:55:30 crc kubenswrapper[4878]: I1204 15:55:30.489469 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"126f168fce699a707074edaac37e35568a6e584f60c9bd35acd41e7356c22041"} Dec 04 15:55:32 crc kubenswrapper[4878]: I1204 15:55:32.518080 
4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"19a994944fcb646205cdbb248da51c43d0dd99efc4d2531df72fc29d38404c27"} Dec 04 15:55:32 crc kubenswrapper[4878]: I1204 15:55:32.518613 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"780bec33c1d505e6d7b820faef1307de799552702f33c12d66f532f33d6884a7"} Dec 04 15:55:33 crc kubenswrapper[4878]: I1204 15:55:33.221272 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371943.63355 podStartE2EDuration="1m33.221225825s" podCreationTimestamp="2025-12-04 15:54:00 +0000 UTC" firstStartedPulling="2025-12-04 15:54:03.35365195 +0000 UTC m=+1087.316188906" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:55:24.368816534 +0000 UTC m=+1168.331353490" watchObservedRunningTime="2025-12-04 15:55:33.221225825 +0000 UTC m=+1177.183762791" Dec 04 15:55:33 crc kubenswrapper[4878]: I1204 15:55:33.547520 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"f9baa27e4c8263e40383cc73d23eacef41d1541dae7a806324c592d1739d5c17"} Dec 04 15:55:33 crc kubenswrapper[4878]: I1204 15:55:33.547575 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"199dcf7ee788370d4a721dbc0166a997c5cbd6a0e068f4a101ffbd10fd87539f"} Dec 04 15:55:34 crc kubenswrapper[4878]: I1204 15:55:34.559570 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"c3560b2c6bc1dd225da168dc0504f78c65dfeb12f916864529b046cc498d8b43"} Dec 04 15:55:34 crc kubenswrapper[4878]: I1204 15:55:34.560511 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"c22dcf3477a6dee733fb8fd7f058c94c7d9462889c44956c18103e2d8dda69e3"} Dec 04 15:55:34 crc kubenswrapper[4878]: I1204 15:55:34.560526 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"10b4321d-097d-4ab2-8014-63c5b80e6839","Type":"ContainerStarted","Data":"00c3269aae884d4bcc8a0b604cbdd74e5cefca3d3e4d02d4b46c38377e8a20fc"} Dec 04 15:55:34 crc kubenswrapper[4878]: I1204 15:55:34.630936 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.855009761 podStartE2EDuration="57.630915749s" podCreationTimestamp="2025-12-04 15:54:37 +0000 UTC" firstStartedPulling="2025-12-04 15:55:11.619310273 +0000 UTC m=+1155.581847229" lastFinishedPulling="2025-12-04 15:55:31.395216261 +0000 UTC m=+1175.357753217" observedRunningTime="2025-12-04 15:55:34.629788611 +0000 UTC m=+1178.592325567" watchObservedRunningTime="2025-12-04 15:55:34.630915749 +0000 UTC m=+1178.593452705" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.149114 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8gp98"] Dec 04 15:55:35 crc kubenswrapper[4878]: E1204 15:55:35.149521 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" containerName="ovn-config" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.149542 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" containerName="ovn-config" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.149789 4878 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc3bff2-25c5-459e-bb2b-44b9e7aabb1c" containerName="ovn-config" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.150838 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.153776 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.164977 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8gp98"] Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.259659 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.259736 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/f250d56b-91ec-4897-88b0-d33f4fbbec3e-kube-api-access-xszrv\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.259761 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.259800 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.260541 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-svc\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.260692 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-config\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.362416 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.362517 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-svc\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.362566 4878 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-config\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.362631 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.362661 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/f250d56b-91ec-4897-88b0-d33f4fbbec3e-kube-api-access-xszrv\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.362685 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.364016 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-svc\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.364033 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.364432 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.365325 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-config\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.367240 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.385778 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/f250d56b-91ec-4897-88b0-d33f4fbbec3e-kube-api-access-xszrv\") pod \"dnsmasq-dns-764c5664d7-8gp98\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.473271 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.587715 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vlv79" event={"ID":"15b32fab-0a73-417d-af80-9b289421b529","Type":"ContainerStarted","Data":"1ec0c51e8a15c821a870901d8a423d3e757cf95f5ed282c484c8093268a98863"} Dec 04 15:55:35 crc kubenswrapper[4878]: I1204 15:55:35.612468 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-vlv79" podStartSLOduration=3.4015223949999998 podStartE2EDuration="30.612444017s" podCreationTimestamp="2025-12-04 15:55:05 +0000 UTC" firstStartedPulling="2025-12-04 15:55:06.690254612 +0000 UTC m=+1150.652791558" lastFinishedPulling="2025-12-04 15:55:33.901176224 +0000 UTC m=+1177.863713180" observedRunningTime="2025-12-04 15:55:35.60303611 +0000 UTC m=+1179.565573066" watchObservedRunningTime="2025-12-04 15:55:35.612444017 +0000 UTC m=+1179.574980973" Dec 04 15:55:36 crc kubenswrapper[4878]: I1204 15:55:36.210650 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8gp98"] Dec 04 15:55:36 crc kubenswrapper[4878]: I1204 15:55:36.598942 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" event={"ID":"f250d56b-91ec-4897-88b0-d33f4fbbec3e","Type":"ContainerStarted","Data":"f891e110eabb6737b0e669e115873c9302e62ce5007f45316eafccfd4a857c6d"} Dec 04 15:55:37 crc kubenswrapper[4878]: I1204 15:55:37.609453 4878 generic.go:334] "Generic (PLEG): container finished" podID="f250d56b-91ec-4897-88b0-d33f4fbbec3e" containerID="ba37dc569707819e9d94abde67ff19a0c3782794d789d2195f01d3528da29451" exitCode=0 Dec 04 15:55:37 crc kubenswrapper[4878]: I1204 15:55:37.609530 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" 
event={"ID":"f250d56b-91ec-4897-88b0-d33f4fbbec3e","Type":"ContainerDied","Data":"ba37dc569707819e9d94abde67ff19a0c3782794d789d2195f01d3528da29451"} Dec 04 15:55:38 crc kubenswrapper[4878]: I1204 15:55:38.622433 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" event={"ID":"f250d56b-91ec-4897-88b0-d33f4fbbec3e","Type":"ContainerStarted","Data":"f64d46cd85579bd5608d1977a8554f858fe2845fe380c0c3dbb3dfb662b25610"} Dec 04 15:55:38 crc kubenswrapper[4878]: I1204 15:55:38.623104 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:38 crc kubenswrapper[4878]: I1204 15:55:38.651694 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" podStartSLOduration=3.651653884 podStartE2EDuration="3.651653884s" podCreationTimestamp="2025-12-04 15:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:55:38.64712219 +0000 UTC m=+1182.609659146" watchObservedRunningTime="2025-12-04 15:55:38.651653884 +0000 UTC m=+1182.614190840" Dec 04 15:55:41 crc kubenswrapper[4878]: I1204 15:55:41.602583 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.057715 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rcch2"] Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.059813 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-rcch2"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.073256 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rcch2"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.152766 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-g7jwn"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.154235 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-g7jwn"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.172250 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5b99-account-create-update-tp2qw"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.175962 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5b99-account-create-update-tp2qw"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.182450 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.194618 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-g7jwn"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.210347 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-operator-scripts\") pod \"cinder-db-create-rcch2\" (UID: \"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\") " pod="openstack/cinder-db-create-rcch2"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.210431 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-kube-api-access-jfdzh\") pod \"cinder-db-create-rcch2\" (UID: \"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\") " pod="openstack/cinder-db-create-rcch2"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.210507 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5b99-account-create-update-tp2qw"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.243169 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.274941 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-684a-account-create-update-pwjmt"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.276416 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-684a-account-create-update-pwjmt"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.280843 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.297142 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-684a-account-create-update-pwjmt"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.312688 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-operator-scripts\") pod \"cinder-db-create-rcch2\" (UID: \"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\") " pod="openstack/cinder-db-create-rcch2"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.312804 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-kube-api-access-jfdzh\") pod \"cinder-db-create-rcch2\" (UID: \"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\") " pod="openstack/cinder-db-create-rcch2"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.312835 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e84f13-7faf-4acb-bee8-57e817842089-operator-scripts\") pod \"barbican-5b99-account-create-update-tp2qw\" (UID: \"c1e84f13-7faf-4acb-bee8-57e817842089\") " pod="openstack/barbican-5b99-account-create-update-tp2qw"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.312891 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96ded51-89ea-4af0-916f-5f63afd77cfa-operator-scripts\") pod \"barbican-db-create-g7jwn\" (UID: \"d96ded51-89ea-4af0-916f-5f63afd77cfa\") " pod="openstack/barbican-db-create-g7jwn"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.312919 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkn5k\" (UniqueName: \"kubernetes.io/projected/c1e84f13-7faf-4acb-bee8-57e817842089-kube-api-access-mkn5k\") pod \"barbican-5b99-account-create-update-tp2qw\" (UID: \"c1e84f13-7faf-4acb-bee8-57e817842089\") " pod="openstack/barbican-5b99-account-create-update-tp2qw"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.313689 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-operator-scripts\") pod \"cinder-db-create-rcch2\" (UID: \"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\") " pod="openstack/cinder-db-create-rcch2"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.313824 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnmvq\" (UniqueName: \"kubernetes.io/projected/d96ded51-89ea-4af0-916f-5f63afd77cfa-kube-api-access-jnmvq\") pod \"barbican-db-create-g7jwn\" (UID: \"d96ded51-89ea-4af0-916f-5f63afd77cfa\") " pod="openstack/barbican-db-create-g7jwn"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.339328 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-kube-api-access-jfdzh\") pod \"cinder-db-create-rcch2\" (UID: \"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\") " pod="openstack/cinder-db-create-rcch2"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.415655 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e84f13-7faf-4acb-bee8-57e817842089-operator-scripts\") pod \"barbican-5b99-account-create-update-tp2qw\" (UID: \"c1e84f13-7faf-4acb-bee8-57e817842089\") " pod="openstack/barbican-5b99-account-create-update-tp2qw"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.415716 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96ded51-89ea-4af0-916f-5f63afd77cfa-operator-scripts\") pod \"barbican-db-create-g7jwn\" (UID: \"d96ded51-89ea-4af0-916f-5f63afd77cfa\") " pod="openstack/barbican-db-create-g7jwn"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.415749 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkn5k\" (UniqueName: \"kubernetes.io/projected/c1e84f13-7faf-4acb-bee8-57e817842089-kube-api-access-mkn5k\") pod \"barbican-5b99-account-create-update-tp2qw\" (UID: \"c1e84f13-7faf-4acb-bee8-57e817842089\") " pod="openstack/barbican-5b99-account-create-update-tp2qw"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.415776 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnmvq\" (UniqueName: \"kubernetes.io/projected/d96ded51-89ea-4af0-916f-5f63afd77cfa-kube-api-access-jnmvq\") pod \"barbican-db-create-g7jwn\" (UID: \"d96ded51-89ea-4af0-916f-5f63afd77cfa\") " pod="openstack/barbican-db-create-g7jwn"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.415987 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cfc7e4-1de9-400b-8c2c-c225aabbae69-operator-scripts\") pod \"cinder-684a-account-create-update-pwjmt\" (UID: \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\") " pod="openstack/cinder-684a-account-create-update-pwjmt"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.416055 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhzt\" (UniqueName: \"kubernetes.io/projected/08cfc7e4-1de9-400b-8c2c-c225aabbae69-kube-api-access-xhhzt\") pod \"cinder-684a-account-create-update-pwjmt\" (UID: \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\") " pod="openstack/cinder-684a-account-create-update-pwjmt"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.416310 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ksg95"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.418405 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e84f13-7faf-4acb-bee8-57e817842089-operator-scripts\") pod \"barbican-5b99-account-create-update-tp2qw\" (UID: \"c1e84f13-7faf-4acb-bee8-57e817842089\") " pod="openstack/barbican-5b99-account-create-update-tp2qw"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.418897 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.419354 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96ded51-89ea-4af0-916f-5f63afd77cfa-operator-scripts\") pod \"barbican-db-create-g7jwn\" (UID: \"d96ded51-89ea-4af0-916f-5f63afd77cfa\") " pod="openstack/barbican-db-create-g7jwn"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.421274 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rcch2"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.429943 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.430288 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.430605 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l74bm"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.430719 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.473178 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnmvq\" (UniqueName: \"kubernetes.io/projected/d96ded51-89ea-4af0-916f-5f63afd77cfa-kube-api-access-jnmvq\") pod \"barbican-db-create-g7jwn\" (UID: \"d96ded51-89ea-4af0-916f-5f63afd77cfa\") " pod="openstack/barbican-db-create-g7jwn"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.485451 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-g7jwn"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.502061 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mn59d"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.503795 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mn59d"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.506302 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkn5k\" (UniqueName: \"kubernetes.io/projected/c1e84f13-7faf-4acb-bee8-57e817842089-kube-api-access-mkn5k\") pod \"barbican-5b99-account-create-update-tp2qw\" (UID: \"c1e84f13-7faf-4acb-bee8-57e817842089\") " pod="openstack/barbican-5b99-account-create-update-tp2qw"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.510864 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mn59d"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.517424 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-combined-ca-bundle\") pod \"keystone-db-sync-ksg95\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.517505 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tthkb\" (UniqueName: \"kubernetes.io/projected/9d64f9f6-f276-43d2-b298-95d7a51d7247-kube-api-access-tthkb\") pod \"keystone-db-sync-ksg95\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.517548 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cfc7e4-1de9-400b-8c2c-c225aabbae69-operator-scripts\") pod \"cinder-684a-account-create-update-pwjmt\" (UID: \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\") " pod="openstack/cinder-684a-account-create-update-pwjmt"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.517604 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhzt\" (UniqueName: \"kubernetes.io/projected/08cfc7e4-1de9-400b-8c2c-c225aabbae69-kube-api-access-xhhzt\") pod \"cinder-684a-account-create-update-pwjmt\" (UID: \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\") " pod="openstack/cinder-684a-account-create-update-pwjmt"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.517638 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-config-data\") pod \"keystone-db-sync-ksg95\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.519285 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cfc7e4-1de9-400b-8c2c-c225aabbae69-operator-scripts\") pod \"cinder-684a-account-create-update-pwjmt\" (UID: \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\") " pod="openstack/cinder-684a-account-create-update-pwjmt"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.542186 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5f07-account-create-update-cbntx"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.543437 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f07-account-create-update-cbntx"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.549126 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.551413 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhzt\" (UniqueName: \"kubernetes.io/projected/08cfc7e4-1de9-400b-8c2c-c225aabbae69-kube-api-access-xhhzt\") pod \"cinder-684a-account-create-update-pwjmt\" (UID: \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\") " pod="openstack/cinder-684a-account-create-update-pwjmt"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.570857 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ksg95"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.589056 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f07-account-create-update-cbntx"]
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.598411 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-684a-account-create-update-pwjmt"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.621105 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tprnr\" (UniqueName: \"kubernetes.io/projected/dc0dafad-a741-434b-9b7d-72a301c16d46-kube-api-access-tprnr\") pod \"neutron-db-create-mn59d\" (UID: \"dc0dafad-a741-434b-9b7d-72a301c16d46\") " pod="openstack/neutron-db-create-mn59d"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.621217 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-combined-ca-bundle\") pod \"keystone-db-sync-ksg95\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.621285 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tthkb\" (UniqueName: \"kubernetes.io/projected/9d64f9f6-f276-43d2-b298-95d7a51d7247-kube-api-access-tthkb\") pod \"keystone-db-sync-ksg95\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.621344 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0dafad-a741-434b-9b7d-72a301c16d46-operator-scripts\") pod \"neutron-db-create-mn59d\" (UID: \"dc0dafad-a741-434b-9b7d-72a301c16d46\") " pod="openstack/neutron-db-create-mn59d"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.621367 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-config-data\") pod \"keystone-db-sync-ksg95\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.626662 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-config-data\") pod \"keystone-db-sync-ksg95\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.725778 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0dafad-a741-434b-9b7d-72a301c16d46-operator-scripts\") pod \"neutron-db-create-mn59d\" (UID: \"dc0dafad-a741-434b-9b7d-72a301c16d46\") " pod="openstack/neutron-db-create-mn59d"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.725853 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tprnr\" (UniqueName: \"kubernetes.io/projected/dc0dafad-a741-434b-9b7d-72a301c16d46-kube-api-access-tprnr\") pod \"neutron-db-create-mn59d\" (UID: \"dc0dafad-a741-434b-9b7d-72a301c16d46\") " pod="openstack/neutron-db-create-mn59d"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.726020 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef8e917-5db2-471b-b047-6d61d46162bc-operator-scripts\") pod \"neutron-5f07-account-create-update-cbntx\" (UID: \"7ef8e917-5db2-471b-b047-6d61d46162bc\") " pod="openstack/neutron-5f07-account-create-update-cbntx"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.726077 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57r2d\" (UniqueName: \"kubernetes.io/projected/7ef8e917-5db2-471b-b047-6d61d46162bc-kube-api-access-57r2d\") pod \"neutron-5f07-account-create-update-cbntx\" (UID: \"7ef8e917-5db2-471b-b047-6d61d46162bc\") " pod="openstack/neutron-5f07-account-create-update-cbntx"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.729074 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0dafad-a741-434b-9b7d-72a301c16d46-operator-scripts\") pod \"neutron-db-create-mn59d\" (UID: \"dc0dafad-a741-434b-9b7d-72a301c16d46\") " pod="openstack/neutron-db-create-mn59d"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.811709 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5b99-account-create-update-tp2qw"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.835374 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef8e917-5db2-471b-b047-6d61d46162bc-operator-scripts\") pod \"neutron-5f07-account-create-update-cbntx\" (UID: \"7ef8e917-5db2-471b-b047-6d61d46162bc\") " pod="openstack/neutron-5f07-account-create-update-cbntx"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.835510 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57r2d\" (UniqueName: \"kubernetes.io/projected/7ef8e917-5db2-471b-b047-6d61d46162bc-kube-api-access-57r2d\") pod \"neutron-5f07-account-create-update-cbntx\" (UID: \"7ef8e917-5db2-471b-b047-6d61d46162bc\") " pod="openstack/neutron-5f07-account-create-update-cbntx"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.836435 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef8e917-5db2-471b-b047-6d61d46162bc-operator-scripts\") pod \"neutron-5f07-account-create-update-cbntx\" (UID: \"7ef8e917-5db2-471b-b047-6d61d46162bc\") " pod="openstack/neutron-5f07-account-create-update-cbntx"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.851221 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-combined-ca-bundle\") pod \"keystone-db-sync-ksg95\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.853921 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tprnr\" (UniqueName: \"kubernetes.io/projected/dc0dafad-a741-434b-9b7d-72a301c16d46-kube-api-access-tprnr\") pod \"neutron-db-create-mn59d\" (UID: \"dc0dafad-a741-434b-9b7d-72a301c16d46\") " pod="openstack/neutron-db-create-mn59d"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.861551 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tthkb\" (UniqueName: \"kubernetes.io/projected/9d64f9f6-f276-43d2-b298-95d7a51d7247-kube-api-access-tthkb\") pod \"keystone-db-sync-ksg95\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.871690 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57r2d\" (UniqueName: \"kubernetes.io/projected/7ef8e917-5db2-471b-b047-6d61d46162bc-kube-api-access-57r2d\") pod \"neutron-5f07-account-create-update-cbntx\" (UID: \"7ef8e917-5db2-471b-b047-6d61d46162bc\") " pod="openstack/neutron-5f07-account-create-update-cbntx"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.872836 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mn59d"
Dec 04 15:55:42 crc kubenswrapper[4878]: I1204 15:55:42.883042 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f07-account-create-update-cbntx"
Dec 04 15:55:43 crc kubenswrapper[4878]: I1204 15:55:43.051506 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ksg95"
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:43.564963 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rcch2"]
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:43.597073 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5b99-account-create-update-tp2qw"]
Dec 04 15:55:45 crc kubenswrapper[4878]: W1204 15:55:43.601620 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1e84f13_7faf_4acb_bee8_57e817842089.slice/crio-3cd5043a3c11f784e6ec56dd5bd72e00d424436b265f3f411436e2e0646b9bf3 WatchSource:0}: Error finding container 3cd5043a3c11f784e6ec56dd5bd72e00d424436b265f3f411436e2e0646b9bf3: Status 404 returned error can't find the container with id 3cd5043a3c11f784e6ec56dd5bd72e00d424436b265f3f411436e2e0646b9bf3
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:43.707627 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rcch2" event={"ID":"f80534b0-7801-4e06-a5a4-54c6cc79fe4c","Type":"ContainerStarted","Data":"e8cc7e52bd2ac359022091778cb7f2385b2bd2c9283fa5b051e8c9a56560c8e6"}
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:43.709517 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5b99-account-create-update-tp2qw" event={"ID":"c1e84f13-7faf-4acb-bee8-57e817842089","Type":"ContainerStarted","Data":"3cd5043a3c11f784e6ec56dd5bd72e00d424436b265f3f411436e2e0646b9bf3"}
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:43.751583 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-g7jwn"]
Dec 04 15:55:45 crc kubenswrapper[4878]: W1204 15:55:43.768097 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd96ded51_89ea_4af0_916f_5f63afd77cfa.slice/crio-36140a50b8847d99a89aa9ad7ba52107ee38b5354c3b6c33c51646e4e0f6a210 WatchSource:0}: Error finding container 36140a50b8847d99a89aa9ad7ba52107ee38b5354c3b6c33c51646e4e0f6a210: Status 404 returned error can't find the container with id 36140a50b8847d99a89aa9ad7ba52107ee38b5354c3b6c33c51646e4e0f6a210
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:44.723126 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g7jwn" event={"ID":"d96ded51-89ea-4af0-916f-5f63afd77cfa","Type":"ContainerStarted","Data":"36140a50b8847d99a89aa9ad7ba52107ee38b5354c3b6c33c51646e4e0f6a210"}
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:45.476109 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-8gp98"
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:45.549583 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wt5jb"]
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:45.550027 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-wt5jb" podUID="315550ac-d3ca-4736-abad-f1cb130fcc4a" containerName="dnsmasq-dns" containerID="cri-o://8d304619dda8e5845b94774d4bb834b7688f3f9bcf47949d9acfbbddc368b96c" gracePeriod=10
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:45.736707 4878 generic.go:334] "Generic (PLEG): container finished" podID="15b32fab-0a73-417d-af80-9b289421b529" containerID="1ec0c51e8a15c821a870901d8a423d3e757cf95f5ed282c484c8093268a98863" exitCode=0
Dec 04 15:55:45 crc kubenswrapper[4878]: I1204 15:55:45.736777 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vlv79" event={"ID":"15b32fab-0a73-417d-af80-9b289421b529","Type":"ContainerDied","Data":"1ec0c51e8a15c821a870901d8a423d3e757cf95f5ed282c484c8093268a98863"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.267103 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mn59d"]
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.284328 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5f07-account-create-update-cbntx"]
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.312552 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-684a-account-create-update-pwjmt"]
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.370933 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ksg95"]
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.758368 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f07-account-create-update-cbntx" event={"ID":"7ef8e917-5db2-471b-b047-6d61d46162bc","Type":"ContainerStarted","Data":"20570fedcf45de1edbfb9693df04faaef5a5b795f93669e66bee90bad7975f72"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.762415 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ksg95" event={"ID":"9d64f9f6-f276-43d2-b298-95d7a51d7247","Type":"ContainerStarted","Data":"02b9ed2b1c6623a1a9e32c53a7df3beb873878246728084012e5331b3f8620e8"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.768160 4878 generic.go:334] "Generic (PLEG): container finished" podID="c1e84f13-7faf-4acb-bee8-57e817842089" containerID="69b4f9b0a939474106302584e690ee1ee8affe015477338afaf80e128960078f" exitCode=0
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.768862 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5b99-account-create-update-tp2qw" event={"ID":"c1e84f13-7faf-4acb-bee8-57e817842089","Type":"ContainerDied","Data":"69b4f9b0a939474106302584e690ee1ee8affe015477338afaf80e128960078f"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.778903 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-684a-account-create-update-pwjmt" event={"ID":"08cfc7e4-1de9-400b-8c2c-c225aabbae69","Type":"ContainerStarted","Data":"35a8d457bc1f3d434265ac86b3be7262e2dfe667945316145937cc03114533f2"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.788965 4878 generic.go:334] "Generic (PLEG): container finished" podID="315550ac-d3ca-4736-abad-f1cb130fcc4a" containerID="8d304619dda8e5845b94774d4bb834b7688f3f9bcf47949d9acfbbddc368b96c" exitCode=0
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.789058 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wt5jb" event={"ID":"315550ac-d3ca-4736-abad-f1cb130fcc4a","Type":"ContainerDied","Data":"8d304619dda8e5845b94774d4bb834b7688f3f9bcf47949d9acfbbddc368b96c"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.789156 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-wt5jb" event={"ID":"315550ac-d3ca-4736-abad-f1cb130fcc4a","Type":"ContainerDied","Data":"3b2a16071916ec261ab77a3c0211baf5b13d07739bae6c54fd9137071b30ac4d"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.789175 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b2a16071916ec261ab77a3c0211baf5b13d07739bae6c54fd9137071b30ac4d"
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.791909 4878 generic.go:334] "Generic (PLEG): container finished" podID="f80534b0-7801-4e06-a5a4-54c6cc79fe4c" containerID="0d64ae47fdde18622d3f0442b21f13a323375fb0ab95eb0bbfed3344c4fb177f" exitCode=0
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.792000 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rcch2" event={"ID":"f80534b0-7801-4e06-a5a4-54c6cc79fe4c","Type":"ContainerDied","Data":"0d64ae47fdde18622d3f0442b21f13a323375fb0ab95eb0bbfed3344c4fb177f"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.794089 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mn59d" event={"ID":"dc0dafad-a741-434b-9b7d-72a301c16d46","Type":"ContainerStarted","Data":"daaa9d71b45c576761b675ba31a7985551b42bee3d9dd99dd8ed7751839ec848"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.797347 4878 generic.go:334] "Generic (PLEG): container finished" podID="d96ded51-89ea-4af0-916f-5f63afd77cfa" containerID="5da97d3802ab0720ac319f116112f8bae51550befc2c6698f9837a3fca7eb9ce" exitCode=0
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.797544 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g7jwn" event={"ID":"d96ded51-89ea-4af0-916f-5f63afd77cfa","Type":"ContainerDied","Data":"5da97d3802ab0720ac319f116112f8bae51550befc2c6698f9837a3fca7eb9ce"}
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.819933 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wt5jb"
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.831580 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-684a-account-create-update-pwjmt" podStartSLOduration=4.831553224 podStartE2EDuration="4.831553224s" podCreationTimestamp="2025-12-04 15:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:55:46.827272116 +0000 UTC m=+1190.789809082" watchObservedRunningTime="2025-12-04 15:55:46.831553224 +0000 UTC m=+1190.794090180"
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.901578 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-dns-svc\") pod \"315550ac-d3ca-4736-abad-f1cb130fcc4a\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") "
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.901763 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-nb\") pod \"315550ac-d3ca-4736-abad-f1cb130fcc4a\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") "
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.901832 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt929\" (UniqueName: \"kubernetes.io/projected/315550ac-d3ca-4736-abad-f1cb130fcc4a-kube-api-access-jt929\") pod \"315550ac-d3ca-4736-abad-f1cb130fcc4a\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") "
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.902088 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-config\") pod \"315550ac-d3ca-4736-abad-f1cb130fcc4a\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") "
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.902205 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-sb\") pod \"315550ac-d3ca-4736-abad-f1cb130fcc4a\" (UID: \"315550ac-d3ca-4736-abad-f1cb130fcc4a\") "
Dec 04 15:55:46 crc kubenswrapper[4878]: I1204 15:55:46.949648 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315550ac-d3ca-4736-abad-f1cb130fcc4a-kube-api-access-jt929" (OuterVolumeSpecName: "kube-api-access-jt929") pod "315550ac-d3ca-4736-abad-f1cb130fcc4a" (UID: "315550ac-d3ca-4736-abad-f1cb130fcc4a"). InnerVolumeSpecName "kube-api-access-jt929". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.011657 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt929\" (UniqueName: \"kubernetes.io/projected/315550ac-d3ca-4736-abad-f1cb130fcc4a-kube-api-access-jt929\") on node \"crc\" DevicePath \"\""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.016070 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "315550ac-d3ca-4736-abad-f1cb130fcc4a" (UID: "315550ac-d3ca-4736-abad-f1cb130fcc4a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.063048 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "315550ac-d3ca-4736-abad-f1cb130fcc4a" (UID: "315550ac-d3ca-4736-abad-f1cb130fcc4a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.078692 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-config" (OuterVolumeSpecName: "config") pod "315550ac-d3ca-4736-abad-f1cb130fcc4a" (UID: "315550ac-d3ca-4736-abad-f1cb130fcc4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.079437 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mn59d" podStartSLOduration=5.079414528 podStartE2EDuration="5.079414528s" podCreationTimestamp="2025-12-04 15:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:55:47.015023826 +0000 UTC m=+1190.977560792" watchObservedRunningTime="2025-12-04 15:55:47.079414528 +0000 UTC m=+1191.041951484"
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.096036 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "315550ac-d3ca-4736-abad-f1cb130fcc4a" (UID: "315550ac-d3ca-4736-abad-f1cb130fcc4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.115255 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.115316 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.115339 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-config\") on node \"crc\" DevicePath \"\""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.115352 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/315550ac-d3ca-4736-abad-f1cb130fcc4a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.529109 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vlv79"
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.626192 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-combined-ca-bundle\") pod \"15b32fab-0a73-417d-af80-9b289421b529\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") "
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.626275 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-db-sync-config-data\") pod \"15b32fab-0a73-417d-af80-9b289421b529\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") "
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.626334 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w92bs\" (UniqueName: \"kubernetes.io/projected/15b32fab-0a73-417d-af80-9b289421b529-kube-api-access-w92bs\") pod \"15b32fab-0a73-417d-af80-9b289421b529\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") "
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.626489 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-config-data\") pod \"15b32fab-0a73-417d-af80-9b289421b529\" (UID: \"15b32fab-0a73-417d-af80-9b289421b529\") "
Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.631901 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b32fab-0a73-417d-af80-9b289421b529-kube-api-access-w92bs" (OuterVolumeSpecName: 
"kube-api-access-w92bs") pod "15b32fab-0a73-417d-af80-9b289421b529" (UID: "15b32fab-0a73-417d-af80-9b289421b529"). InnerVolumeSpecName "kube-api-access-w92bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.632053 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "15b32fab-0a73-417d-af80-9b289421b529" (UID: "15b32fab-0a73-417d-af80-9b289421b529"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.656757 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15b32fab-0a73-417d-af80-9b289421b529" (UID: "15b32fab-0a73-417d-af80-9b289421b529"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.685118 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-config-data" (OuterVolumeSpecName: "config-data") pod "15b32fab-0a73-417d-af80-9b289421b529" (UID: "15b32fab-0a73-417d-af80-9b289421b529"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.729034 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.729085 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.729100 4878 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/15b32fab-0a73-417d-af80-9b289421b529-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.729113 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w92bs\" (UniqueName: \"kubernetes.io/projected/15b32fab-0a73-417d-af80-9b289421b529-kube-api-access-w92bs\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.816462 4878 generic.go:334] "Generic (PLEG): container finished" podID="dc0dafad-a741-434b-9b7d-72a301c16d46" containerID="0e1ec4269a66e30a56708646cfd6218ab77564995f125aae2e0943b0af60fc0f" exitCode=0 Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.816567 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mn59d" event={"ID":"dc0dafad-a741-434b-9b7d-72a301c16d46","Type":"ContainerDied","Data":"0e1ec4269a66e30a56708646cfd6218ab77564995f125aae2e0943b0af60fc0f"} Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.819099 4878 generic.go:334] "Generic (PLEG): container finished" podID="7ef8e917-5db2-471b-b047-6d61d46162bc" containerID="315fafdd44eef789649674294b3c53407252fb909e522008db059c2c0d3bb0c2" exitCode=0 Dec 04 15:55:47 crc 
kubenswrapper[4878]: I1204 15:55:47.819186 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f07-account-create-update-cbntx" event={"ID":"7ef8e917-5db2-471b-b047-6d61d46162bc","Type":"ContainerDied","Data":"315fafdd44eef789649674294b3c53407252fb909e522008db059c2c0d3bb0c2"} Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.821634 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-vlv79" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.821628 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-vlv79" event={"ID":"15b32fab-0a73-417d-af80-9b289421b529","Type":"ContainerDied","Data":"fe70b98ce7290b8d7864ae8f882d421ac41d87eb95bee263cb8fecb10e552c16"} Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.821787 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe70b98ce7290b8d7864ae8f882d421ac41d87eb95bee263cb8fecb10e552c16" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.823982 4878 generic.go:334] "Generic (PLEG): container finished" podID="08cfc7e4-1de9-400b-8c2c-c225aabbae69" containerID="1cc03292fc04da1e82c691fd5819818c607331bf4d7edd89b5d2f4c90863622e" exitCode=0 Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.824063 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-684a-account-create-update-pwjmt" event={"ID":"08cfc7e4-1de9-400b-8c2c-c225aabbae69","Type":"ContainerDied","Data":"1cc03292fc04da1e82c691fd5819818c607331bf4d7edd89b5d2f4c90863622e"} Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.824232 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-wt5jb" Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.879719 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wt5jb"] Dec 04 15:55:47 crc kubenswrapper[4878]: I1204 15:55:47.896960 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-wt5jb"] Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.262179 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5h7ng"] Dec 04 15:55:48 crc kubenswrapper[4878]: E1204 15:55:48.263018 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315550ac-d3ca-4736-abad-f1cb130fcc4a" containerName="dnsmasq-dns" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.263031 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="315550ac-d3ca-4736-abad-f1cb130fcc4a" containerName="dnsmasq-dns" Dec 04 15:55:48 crc kubenswrapper[4878]: E1204 15:55:48.263046 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b32fab-0a73-417d-af80-9b289421b529" containerName="glance-db-sync" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.263053 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b32fab-0a73-417d-af80-9b289421b529" containerName="glance-db-sync" Dec 04 15:55:48 crc kubenswrapper[4878]: E1204 15:55:48.263092 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315550ac-d3ca-4736-abad-f1cb130fcc4a" containerName="init" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.263100 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="315550ac-d3ca-4736-abad-f1cb130fcc4a" containerName="init" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.263263 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b32fab-0a73-417d-af80-9b289421b529" containerName="glance-db-sync" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.263296 4878 
memory_manager.go:354] "RemoveStaleState removing state" podUID="315550ac-d3ca-4736-abad-f1cb130fcc4a" containerName="dnsmasq-dns" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.264734 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.290717 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5h7ng"] Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.302719 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-g7jwn" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.349359 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptvgk\" (UniqueName: \"kubernetes.io/projected/84ae3851-e9c9-4643-97c2-937ad6b572f9-kube-api-access-ptvgk\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.349434 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.349511 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-config\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.349586 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.349690 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.349753 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.455242 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96ded51-89ea-4af0-916f-5f63afd77cfa-operator-scripts\") pod \"d96ded51-89ea-4af0-916f-5f63afd77cfa\" (UID: \"d96ded51-89ea-4af0-916f-5f63afd77cfa\") " Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.455508 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnmvq\" (UniqueName: \"kubernetes.io/projected/d96ded51-89ea-4af0-916f-5f63afd77cfa-kube-api-access-jnmvq\") pod \"d96ded51-89ea-4af0-916f-5f63afd77cfa\" (UID: \"d96ded51-89ea-4af0-916f-5f63afd77cfa\") " Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.456954 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/d96ded51-89ea-4af0-916f-5f63afd77cfa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d96ded51-89ea-4af0-916f-5f63afd77cfa" (UID: "d96ded51-89ea-4af0-916f-5f63afd77cfa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.463850 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d96ded51-89ea-4af0-916f-5f63afd77cfa-kube-api-access-jnmvq" (OuterVolumeSpecName: "kube-api-access-jnmvq") pod "d96ded51-89ea-4af0-916f-5f63afd77cfa" (UID: "d96ded51-89ea-4af0-916f-5f63afd77cfa"). InnerVolumeSpecName "kube-api-access-jnmvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.471152 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.471297 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptvgk\" (UniqueName: \"kubernetes.io/projected/84ae3851-e9c9-4643-97c2-937ad6b572f9-kube-api-access-ptvgk\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.471352 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 
15:55:48.471535 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-config\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.471753 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.471861 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.472071 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnmvq\" (UniqueName: \"kubernetes.io/projected/d96ded51-89ea-4af0-916f-5f63afd77cfa-kube-api-access-jnmvq\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.472101 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d96ded51-89ea-4af0-916f-5f63afd77cfa-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.472908 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: 
\"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.473258 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.474630 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.474685 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.475017 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-config\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.499446 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptvgk\" (UniqueName: \"kubernetes.io/projected/84ae3851-e9c9-4643-97c2-937ad6b572f9-kube-api-access-ptvgk\") pod \"dnsmasq-dns-74f6bcbc87-5h7ng\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 
15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.562517 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rcch2" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.602490 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.608788 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5b99-account-create-update-tp2qw" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.678994 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e84f13-7faf-4acb-bee8-57e817842089-operator-scripts\") pod \"c1e84f13-7faf-4acb-bee8-57e817842089\" (UID: \"c1e84f13-7faf-4acb-bee8-57e817842089\") " Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.679167 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkn5k\" (UniqueName: \"kubernetes.io/projected/c1e84f13-7faf-4acb-bee8-57e817842089-kube-api-access-mkn5k\") pod \"c1e84f13-7faf-4acb-bee8-57e817842089\" (UID: \"c1e84f13-7faf-4acb-bee8-57e817842089\") " Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.679216 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-operator-scripts\") pod \"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\" (UID: \"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\") " Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.679291 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-kube-api-access-jfdzh\") pod \"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\" (UID: 
\"f80534b0-7801-4e06-a5a4-54c6cc79fe4c\") " Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.681757 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e84f13-7faf-4acb-bee8-57e817842089-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1e84f13-7faf-4acb-bee8-57e817842089" (UID: "c1e84f13-7faf-4acb-bee8-57e817842089"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.682546 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f80534b0-7801-4e06-a5a4-54c6cc79fe4c" (UID: "f80534b0-7801-4e06-a5a4-54c6cc79fe4c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.782146 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.782185 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1e84f13-7faf-4acb-bee8-57e817842089-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.786997 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-kube-api-access-jfdzh" (OuterVolumeSpecName: "kube-api-access-jfdzh") pod "f80534b0-7801-4e06-a5a4-54c6cc79fe4c" (UID: "f80534b0-7801-4e06-a5a4-54c6cc79fe4c"). InnerVolumeSpecName "kube-api-access-jfdzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.794566 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e84f13-7faf-4acb-bee8-57e817842089-kube-api-access-mkn5k" (OuterVolumeSpecName: "kube-api-access-mkn5k") pod "c1e84f13-7faf-4acb-bee8-57e817842089" (UID: "c1e84f13-7faf-4acb-bee8-57e817842089"). InnerVolumeSpecName "kube-api-access-mkn5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.849972 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rcch2" event={"ID":"f80534b0-7801-4e06-a5a4-54c6cc79fe4c","Type":"ContainerDied","Data":"e8cc7e52bd2ac359022091778cb7f2385b2bd2c9283fa5b051e8c9a56560c8e6"} Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.850040 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8cc7e52bd2ac359022091778cb7f2385b2bd2c9283fa5b051e8c9a56560c8e6" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.850129 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rcch2" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.858923 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-g7jwn" event={"ID":"d96ded51-89ea-4af0-916f-5f63afd77cfa","Type":"ContainerDied","Data":"36140a50b8847d99a89aa9ad7ba52107ee38b5354c3b6c33c51646e4e0f6a210"} Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.858988 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36140a50b8847d99a89aa9ad7ba52107ee38b5354c3b6c33c51646e4e0f6a210" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.859143 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-g7jwn" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.867446 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5b99-account-create-update-tp2qw" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.869156 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5b99-account-create-update-tp2qw" event={"ID":"c1e84f13-7faf-4acb-bee8-57e817842089","Type":"ContainerDied","Data":"3cd5043a3c11f784e6ec56dd5bd72e00d424436b265f3f411436e2e0646b9bf3"} Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.869348 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd5043a3c11f784e6ec56dd5bd72e00d424436b265f3f411436e2e0646b9bf3" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.883578 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfdzh\" (UniqueName: \"kubernetes.io/projected/f80534b0-7801-4e06-a5a4-54c6cc79fe4c-kube-api-access-jfdzh\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:48 crc kubenswrapper[4878]: I1204 15:55:48.883627 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkn5k\" (UniqueName: \"kubernetes.io/projected/c1e84f13-7faf-4acb-bee8-57e817842089-kube-api-access-mkn5k\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.194511 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="315550ac-d3ca-4736-abad-f1cb130fcc4a" path="/var/lib/kubelet/pods/315550ac-d3ca-4736-abad-f1cb130fcc4a/volumes" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.525536 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5h7ng"] Dec 04 15:55:49 crc kubenswrapper[4878]: W1204 15:55:49.543087 4878 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84ae3851_e9c9_4643_97c2_937ad6b572f9.slice/crio-7ed14bbe98c6f243dcfa202b9e34fa2d8cd34e142ad9bc7a121d782350f1c3d1 WatchSource:0}: Error finding container 7ed14bbe98c6f243dcfa202b9e34fa2d8cd34e142ad9bc7a121d782350f1c3d1: Status 404 returned error can't find the container with id 7ed14bbe98c6f243dcfa202b9e34fa2d8cd34e142ad9bc7a121d782350f1c3d1 Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.593367 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-684a-account-create-update-pwjmt" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.605529 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f07-account-create-update-cbntx" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.634090 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mn59d" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.706683 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cfc7e4-1de9-400b-8c2c-c225aabbae69-operator-scripts\") pod \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\" (UID: \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\") " Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.707439 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhhzt\" (UniqueName: \"kubernetes.io/projected/08cfc7e4-1de9-400b-8c2c-c225aabbae69-kube-api-access-xhhzt\") pod \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\" (UID: \"08cfc7e4-1de9-400b-8c2c-c225aabbae69\") " Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.707486 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57r2d\" (UniqueName: \"kubernetes.io/projected/7ef8e917-5db2-471b-b047-6d61d46162bc-kube-api-access-57r2d\") 
pod \"7ef8e917-5db2-471b-b047-6d61d46162bc\" (UID: \"7ef8e917-5db2-471b-b047-6d61d46162bc\") " Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.707511 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0dafad-a741-434b-9b7d-72a301c16d46-operator-scripts\") pod \"dc0dafad-a741-434b-9b7d-72a301c16d46\" (UID: \"dc0dafad-a741-434b-9b7d-72a301c16d46\") " Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.707824 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08cfc7e4-1de9-400b-8c2c-c225aabbae69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08cfc7e4-1de9-400b-8c2c-c225aabbae69" (UID: "08cfc7e4-1de9-400b-8c2c-c225aabbae69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.708988 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc0dafad-a741-434b-9b7d-72a301c16d46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc0dafad-a741-434b-9b7d-72a301c16d46" (UID: "dc0dafad-a741-434b-9b7d-72a301c16d46"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.710239 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef8e917-5db2-471b-b047-6d61d46162bc-operator-scripts\") pod \"7ef8e917-5db2-471b-b047-6d61d46162bc\" (UID: \"7ef8e917-5db2-471b-b047-6d61d46162bc\") " Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.710568 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tprnr\" (UniqueName: \"kubernetes.io/projected/dc0dafad-a741-434b-9b7d-72a301c16d46-kube-api-access-tprnr\") pod \"dc0dafad-a741-434b-9b7d-72a301c16d46\" (UID: \"dc0dafad-a741-434b-9b7d-72a301c16d46\") " Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.711111 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ef8e917-5db2-471b-b047-6d61d46162bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ef8e917-5db2-471b-b047-6d61d46162bc" (UID: "7ef8e917-5db2-471b-b047-6d61d46162bc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.711539 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08cfc7e4-1de9-400b-8c2c-c225aabbae69-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.711567 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0dafad-a741-434b-9b7d-72a301c16d46-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.711577 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ef8e917-5db2-471b-b047-6d61d46162bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.713220 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef8e917-5db2-471b-b047-6d61d46162bc-kube-api-access-57r2d" (OuterVolumeSpecName: "kube-api-access-57r2d") pod "7ef8e917-5db2-471b-b047-6d61d46162bc" (UID: "7ef8e917-5db2-471b-b047-6d61d46162bc"). InnerVolumeSpecName "kube-api-access-57r2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.713423 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cfc7e4-1de9-400b-8c2c-c225aabbae69-kube-api-access-xhhzt" (OuterVolumeSpecName: "kube-api-access-xhhzt") pod "08cfc7e4-1de9-400b-8c2c-c225aabbae69" (UID: "08cfc7e4-1de9-400b-8c2c-c225aabbae69"). InnerVolumeSpecName "kube-api-access-xhhzt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.715097 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc0dafad-a741-434b-9b7d-72a301c16d46-kube-api-access-tprnr" (OuterVolumeSpecName: "kube-api-access-tprnr") pod "dc0dafad-a741-434b-9b7d-72a301c16d46" (UID: "dc0dafad-a741-434b-9b7d-72a301c16d46"). InnerVolumeSpecName "kube-api-access-tprnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.813166 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhhzt\" (UniqueName: \"kubernetes.io/projected/08cfc7e4-1de9-400b-8c2c-c225aabbae69-kube-api-access-xhhzt\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.813214 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57r2d\" (UniqueName: \"kubernetes.io/projected/7ef8e917-5db2-471b-b047-6d61d46162bc-kube-api-access-57r2d\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.813225 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tprnr\" (UniqueName: \"kubernetes.io/projected/dc0dafad-a741-434b-9b7d-72a301c16d46-kube-api-access-tprnr\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.887228 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mn59d" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.888629 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mn59d" event={"ID":"dc0dafad-a741-434b-9b7d-72a301c16d46","Type":"ContainerDied","Data":"daaa9d71b45c576761b675ba31a7985551b42bee3d9dd99dd8ed7751839ec848"} Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.888685 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daaa9d71b45c576761b675ba31a7985551b42bee3d9dd99dd8ed7751839ec848" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.892364 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5f07-account-create-update-cbntx" event={"ID":"7ef8e917-5db2-471b-b047-6d61d46162bc","Type":"ContainerDied","Data":"20570fedcf45de1edbfb9693df04faaef5a5b795f93669e66bee90bad7975f72"} Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.892401 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20570fedcf45de1edbfb9693df04faaef5a5b795f93669e66bee90bad7975f72" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.892404 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5f07-account-create-update-cbntx" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.902092 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-684a-account-create-update-pwjmt" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.902529 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-684a-account-create-update-pwjmt" event={"ID":"08cfc7e4-1de9-400b-8c2c-c225aabbae69","Type":"ContainerDied","Data":"35a8d457bc1f3d434265ac86b3be7262e2dfe667945316145937cc03114533f2"} Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.902692 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35a8d457bc1f3d434265ac86b3be7262e2dfe667945316145937cc03114533f2" Dec 04 15:55:49 crc kubenswrapper[4878]: I1204 15:55:49.905187 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" event={"ID":"84ae3851-e9c9-4643-97c2-937ad6b572f9","Type":"ContainerStarted","Data":"7ed14bbe98c6f243dcfa202b9e34fa2d8cd34e142ad9bc7a121d782350f1c3d1"} Dec 04 15:55:50 crc kubenswrapper[4878]: I1204 15:55:50.918698 4878 generic.go:334] "Generic (PLEG): container finished" podID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerID="9d14b5a5428368e5015ab72e4653b365bb5ce923fd21e5a0f5fb49319f7ee616" exitCode=0 Dec 04 15:55:50 crc kubenswrapper[4878]: I1204 15:55:50.918766 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" event={"ID":"84ae3851-e9c9-4643-97c2-937ad6b572f9","Type":"ContainerDied","Data":"9d14b5a5428368e5015ab72e4653b365bb5ce923fd21e5a0f5fb49319f7ee616"} Dec 04 15:55:53 crc kubenswrapper[4878]: I1204 15:55:53.957345 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" event={"ID":"84ae3851-e9c9-4643-97c2-937ad6b572f9","Type":"ContainerStarted","Data":"edb078c752fb42df809c9ee9c06ee6e96e28dc9834f8c46bc5d5c2c457810b17"} Dec 04 15:55:53 crc kubenswrapper[4878]: I1204 15:55:53.958352 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" 
Dec 04 15:55:53 crc kubenswrapper[4878]: I1204 15:55:53.960974 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ksg95" event={"ID":"9d64f9f6-f276-43d2-b298-95d7a51d7247","Type":"ContainerStarted","Data":"d29a6a166de080e8e0ab9a10be01628c48794d45a404f3a043dd3c5d929066bf"} Dec 04 15:55:53 crc kubenswrapper[4878]: I1204 15:55:53.985312 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" podStartSLOduration=5.9852786590000004 podStartE2EDuration="5.985278659s" podCreationTimestamp="2025-12-04 15:55:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:55:53.980169641 +0000 UTC m=+1197.942706667" watchObservedRunningTime="2025-12-04 15:55:53.985278659 +0000 UTC m=+1197.947815615" Dec 04 15:55:54 crc kubenswrapper[4878]: I1204 15:55:54.006143 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ksg95" podStartSLOduration=5.27633367 podStartE2EDuration="12.006118464s" podCreationTimestamp="2025-12-04 15:55:42 +0000 UTC" firstStartedPulling="2025-12-04 15:55:46.405115141 +0000 UTC m=+1190.367652097" lastFinishedPulling="2025-12-04 15:55:53.134899925 +0000 UTC m=+1197.097436891" observedRunningTime="2025-12-04 15:55:54.002459222 +0000 UTC m=+1197.964996188" watchObservedRunningTime="2025-12-04 15:55:54.006118464 +0000 UTC m=+1197.968655410" Dec 04 15:55:56 crc kubenswrapper[4878]: I1204 15:55:56.993046 4878 generic.go:334] "Generic (PLEG): container finished" podID="9d64f9f6-f276-43d2-b298-95d7a51d7247" containerID="d29a6a166de080e8e0ab9a10be01628c48794d45a404f3a043dd3c5d929066bf" exitCode=0 Dec 04 15:55:56 crc kubenswrapper[4878]: I1204 15:55:56.993169 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ksg95" 
event={"ID":"9d64f9f6-f276-43d2-b298-95d7a51d7247","Type":"ContainerDied","Data":"d29a6a166de080e8e0ab9a10be01628c48794d45a404f3a043dd3c5d929066bf"} Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.399922 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ksg95" Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.500195 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-combined-ca-bundle\") pod \"9d64f9f6-f276-43d2-b298-95d7a51d7247\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.500265 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tthkb\" (UniqueName: \"kubernetes.io/projected/9d64f9f6-f276-43d2-b298-95d7a51d7247-kube-api-access-tthkb\") pod \"9d64f9f6-f276-43d2-b298-95d7a51d7247\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.500497 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-config-data\") pod \"9d64f9f6-f276-43d2-b298-95d7a51d7247\" (UID: \"9d64f9f6-f276-43d2-b298-95d7a51d7247\") " Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.508378 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d64f9f6-f276-43d2-b298-95d7a51d7247-kube-api-access-tthkb" (OuterVolumeSpecName: "kube-api-access-tthkb") pod "9d64f9f6-f276-43d2-b298-95d7a51d7247" (UID: "9d64f9f6-f276-43d2-b298-95d7a51d7247"). InnerVolumeSpecName "kube-api-access-tthkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.535382 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d64f9f6-f276-43d2-b298-95d7a51d7247" (UID: "9d64f9f6-f276-43d2-b298-95d7a51d7247"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.551622 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-config-data" (OuterVolumeSpecName: "config-data") pod "9d64f9f6-f276-43d2-b298-95d7a51d7247" (UID: "9d64f9f6-f276-43d2-b298-95d7a51d7247"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.603850 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.604013 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d64f9f6-f276-43d2-b298-95d7a51d7247-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.604032 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tthkb\" (UniqueName: \"kubernetes.io/projected/9d64f9f6-f276-43d2-b298-95d7a51d7247-kube-api-access-tthkb\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.604080 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.679730 4878 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8gp98"] Dec 04 15:55:58 crc kubenswrapper[4878]: I1204 15:55:58.680182 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" podUID="f250d56b-91ec-4897-88b0-d33f4fbbec3e" containerName="dnsmasq-dns" containerID="cri-o://f64d46cd85579bd5608d1977a8554f858fe2845fe380c0c3dbb3dfb662b25610" gracePeriod=10 Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.058319 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ksg95" event={"ID":"9d64f9f6-f276-43d2-b298-95d7a51d7247","Type":"ContainerDied","Data":"02b9ed2b1c6623a1a9e32c53a7df3beb873878246728084012e5331b3f8620e8"} Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.058366 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02b9ed2b1c6623a1a9e32c53a7df3beb873878246728084012e5331b3f8620e8" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.058466 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ksg95" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.062437 4878 generic.go:334] "Generic (PLEG): container finished" podID="f250d56b-91ec-4897-88b0-d33f4fbbec3e" containerID="f64d46cd85579bd5608d1977a8554f858fe2845fe380c0c3dbb3dfb662b25610" exitCode=0 Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.062487 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" event={"ID":"f250d56b-91ec-4897-88b0-d33f4fbbec3e","Type":"ContainerDied","Data":"f64d46cd85579bd5608d1977a8554f858fe2845fe380c0c3dbb3dfb662b25610"} Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.129112 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.251830 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4k8h9"] Dec 04 15:55:59 crc kubenswrapper[4878]: E1204 15:55:59.252445 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d96ded51-89ea-4af0-916f-5f63afd77cfa" containerName="mariadb-database-create" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252466 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="d96ded51-89ea-4af0-916f-5f63afd77cfa" containerName="mariadb-database-create" Dec 04 15:55:59 crc kubenswrapper[4878]: E1204 15:55:59.252487 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef8e917-5db2-471b-b047-6d61d46162bc" containerName="mariadb-account-create-update" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252498 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef8e917-5db2-471b-b047-6d61d46162bc" containerName="mariadb-account-create-update" Dec 04 15:55:59 crc kubenswrapper[4878]: E1204 15:55:59.252514 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f250d56b-91ec-4897-88b0-d33f4fbbec3e" containerName="dnsmasq-dns" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252522 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f250d56b-91ec-4897-88b0-d33f4fbbec3e" containerName="dnsmasq-dns" Dec 04 15:55:59 crc kubenswrapper[4878]: E1204 15:55:59.252537 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e84f13-7faf-4acb-bee8-57e817842089" containerName="mariadb-account-create-update" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252545 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e84f13-7faf-4acb-bee8-57e817842089" containerName="mariadb-account-create-update" Dec 04 15:55:59 crc kubenswrapper[4878]: E1204 15:55:59.252564 4878 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="9d64f9f6-f276-43d2-b298-95d7a51d7247" containerName="keystone-db-sync" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252573 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d64f9f6-f276-43d2-b298-95d7a51d7247" containerName="keystone-db-sync" Dec 04 15:55:59 crc kubenswrapper[4878]: E1204 15:55:59.252583 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f80534b0-7801-4e06-a5a4-54c6cc79fe4c" containerName="mariadb-database-create" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252591 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f80534b0-7801-4e06-a5a4-54c6cc79fe4c" containerName="mariadb-database-create" Dec 04 15:55:59 crc kubenswrapper[4878]: E1204 15:55:59.252611 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f250d56b-91ec-4897-88b0-d33f4fbbec3e" containerName="init" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252620 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f250d56b-91ec-4897-88b0-d33f4fbbec3e" containerName="init" Dec 04 15:55:59 crc kubenswrapper[4878]: E1204 15:55:59.252633 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0dafad-a741-434b-9b7d-72a301c16d46" containerName="mariadb-database-create" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252642 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0dafad-a741-434b-9b7d-72a301c16d46" containerName="mariadb-database-create" Dec 04 15:55:59 crc kubenswrapper[4878]: E1204 15:55:59.252663 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cfc7e4-1de9-400b-8c2c-c225aabbae69" containerName="mariadb-account-create-update" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252671 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cfc7e4-1de9-400b-8c2c-c225aabbae69" containerName="mariadb-account-create-update" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252906 4878 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="08cfc7e4-1de9-400b-8c2c-c225aabbae69" containerName="mariadb-account-create-update" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252927 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d64f9f6-f276-43d2-b298-95d7a51d7247" containerName="keystone-db-sync" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252947 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0dafad-a741-434b-9b7d-72a301c16d46" containerName="mariadb-database-create" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252959 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e84f13-7faf-4acb-bee8-57e817842089" containerName="mariadb-account-create-update" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252970 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f80534b0-7801-4e06-a5a4-54c6cc79fe4c" containerName="mariadb-database-create" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252980 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="d96ded51-89ea-4af0-916f-5f63afd77cfa" containerName="mariadb-database-create" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.252993 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef8e917-5db2-471b-b047-6d61d46162bc" containerName="mariadb-account-create-update" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.253008 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f250d56b-91ec-4897-88b0-d33f4fbbec3e" containerName="dnsmasq-dns" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.253933 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.259150 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.259445 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.259642 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.259715 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l74bm" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.261062 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.279678 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2gmgs"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.282018 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.302378 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4k8h9"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.317553 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-svc\") pod \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.317607 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/f250d56b-91ec-4897-88b0-d33f4fbbec3e-kube-api-access-xszrv\") pod \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.317637 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-nb\") pod \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.317717 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-config\") pod \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.317765 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-sb\") pod \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " Dec 04 15:55:59 
crc kubenswrapper[4878]: I1204 15:55:59.317840 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-swift-storage-0\") pod \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\" (UID: \"f250d56b-91ec-4897-88b0-d33f4fbbec3e\") " Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318107 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-config-data\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318141 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kgdq\" (UniqueName: \"kubernetes.io/projected/f9f2f41d-8c30-404e-846b-a2a041621fd9-kube-api-access-6kgdq\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318200 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-config\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318262 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 
15:55:59.318287 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-scripts\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318347 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-credential-keys\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318377 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-svc\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318405 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318444 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-fernet-keys\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 
15:55:59.318481 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-combined-ca-bundle\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318512 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.318574 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55q2\" (UniqueName: \"kubernetes.io/projected/4b5334f5-9a66-4256-a413-befc5f23b01b-kube-api-access-j55q2\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.319170 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2gmgs"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.332121 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f250d56b-91ec-4897-88b0-d33f4fbbec3e-kube-api-access-xszrv" (OuterVolumeSpecName: "kube-api-access-xszrv") pod "f250d56b-91ec-4897-88b0-d33f4fbbec3e" (UID: "f250d56b-91ec-4897-88b0-d33f4fbbec3e"). InnerVolumeSpecName "kube-api-access-xszrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419642 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-scripts\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419718 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-credential-keys\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419750 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-svc\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419773 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419799 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-fernet-keys\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 
15:55:59.419822 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-combined-ca-bundle\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419844 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419894 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55q2\" (UniqueName: \"kubernetes.io/projected/4b5334f5-9a66-4256-a413-befc5f23b01b-kube-api-access-j55q2\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419919 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-config-data\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419939 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kgdq\" (UniqueName: \"kubernetes.io/projected/f9f2f41d-8c30-404e-846b-a2a041621fd9-kube-api-access-6kgdq\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419962 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-config\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.419983 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.420039 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xszrv\" (UniqueName: \"kubernetes.io/projected/f250d56b-91ec-4897-88b0-d33f4fbbec3e-kube-api-access-xszrv\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.424821 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-fernet-keys\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.427565 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-svc\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.428282 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: 
\"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.430614 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-config\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.434836 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f250d56b-91ec-4897-88b0-d33f4fbbec3e" (UID: "f250d56b-91ec-4897-88b0-d33f4fbbec3e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.435728 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.435760 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-config-data\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.436281 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " 
pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.440898 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-credential-keys\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.443641 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f250d56b-91ec-4897-88b0-d33f4fbbec3e" (UID: "f250d56b-91ec-4897-88b0-d33f4fbbec3e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.463752 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-combined-ca-bundle\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.473245 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-scripts\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.474371 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kgdq\" (UniqueName: \"kubernetes.io/projected/f9f2f41d-8c30-404e-846b-a2a041621fd9-kube-api-access-6kgdq\") pod \"keystone-bootstrap-4k8h9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc 
kubenswrapper[4878]: I1204 15:55:59.474629 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55q2\" (UniqueName: \"kubernetes.io/projected/4b5334f5-9a66-4256-a413-befc5f23b01b-kube-api-access-j55q2\") pod \"dnsmasq-dns-847c4cc679-2gmgs\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.493855 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-sljcs"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.504458 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.514804 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.515088 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.515226 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kqkbb" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.518126 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f250d56b-91ec-4897-88b0-d33f4fbbec3e" (UID: "f250d56b-91ec-4897-88b0-d33f4fbbec3e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.523031 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7b4a412-5105-473d-8037-1b43c331046b-etc-machine-id\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.523157 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-combined-ca-bundle\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.523233 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-scripts\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.528289 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-config-data\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.528560 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-db-sync-config-data\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " 
pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.528984 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmv4\" (UniqueName: \"kubernetes.io/projected/b7b4a412-5105-473d-8037-1b43c331046b-kube-api-access-8rmv4\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.529244 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.529266 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.529282 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.535778 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f250d56b-91ec-4897-88b0-d33f4fbbec3e" (UID: "f250d56b-91ec-4897-88b0-d33f4fbbec3e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.537387 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sljcs"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.553488 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-config" (OuterVolumeSpecName: "config") pod "f250d56b-91ec-4897-88b0-d33f4fbbec3e" (UID: "f250d56b-91ec-4897-88b0-d33f4fbbec3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.556118 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b9b4f5745-7mt44"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.557809 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.568622 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.568911 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-bxr47" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.569049 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.569163 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.582093 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b9b4f5745-7mt44"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.588908 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.632009 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.637691 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-combined-ca-bundle\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.637776 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-config-data\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.637843 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-scripts\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.637922 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-config-data\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.637976 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-db-sync-config-data\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.638003 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-horizon-secret-key\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.638037 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb2cm\" (UniqueName: \"kubernetes.io/projected/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-kube-api-access-sb2cm\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.638089 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmv4\" (UniqueName: \"kubernetes.io/projected/b7b4a412-5105-473d-8037-1b43c331046b-kube-api-access-8rmv4\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.638148 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-scripts\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.638223 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b7b4a412-5105-473d-8037-1b43c331046b-etc-machine-id\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.638279 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-logs\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.638403 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.638415 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f250d56b-91ec-4897-88b0-d33f4fbbec3e-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.644940 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7b4a412-5105-473d-8037-1b43c331046b-etc-machine-id\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.648206 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-combined-ca-bundle\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.648681 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-scripts\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.658322 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-config-data\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.673941 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-db-sync-config-data\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.717082 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmv4\" (UniqueName: \"kubernetes.io/projected/b7b4a412-5105-473d-8037-1b43c331046b-kube-api-access-8rmv4\") pod \"cinder-db-sync-sljcs\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " pod="openstack/cinder-db-sync-sljcs" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.724582 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-vm2hn"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.753681 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.764568 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-logs\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.764708 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-config-data\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.764857 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-horizon-secret-key\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.769646 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.769933 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5zfv7" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.782337 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb2cm\" (UniqueName: \"kubernetes.io/projected/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-kube-api-access-sb2cm\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.782550 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-scripts\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.783786 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-scripts\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.784375 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-logs\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.784429 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-config-data\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.786445 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-4pqmg"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.808824 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-horizon-secret-key\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.809832 4878 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.831810 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.832241 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.833452 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jst5w" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.880524 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb2cm\" (UniqueName: \"kubernetes.io/projected/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-kube-api-access-sb2cm\") pod \"horizon-7b9b4f5745-7mt44\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.888419 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-db-sync-config-data\") pod \"barbican-db-sync-vm2hn\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.888469 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-combined-ca-bundle\") pod \"barbican-db-sync-vm2hn\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.888528 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-combined-ca-bundle\") pod \"neutron-db-sync-4pqmg\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.888551 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgzqw\" (UniqueName: \"kubernetes.io/projected/67596249-6134-4ecd-8c9f-865a51c1cbfa-kube-api-access-hgzqw\") pod \"neutron-db-sync-4pqmg\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.888579 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-config\") pod \"neutron-db-sync-4pqmg\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.888601 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8v4k\" (UniqueName: \"kubernetes.io/projected/d7a20413-55ed-48d6-98c3-0bd98368deaa-kube-api-access-v8v4k\") pod \"barbican-db-sync-vm2hn\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.888907 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vm2hn"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.902013 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4pqmg"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.911951 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7787668ff9-nlh6p"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.913638 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.921797 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7787668ff9-nlh6p"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.949705 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2gmgs"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.965492 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.967464 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.973454 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xxz6r" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.977245 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.977544 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.978156 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.983115 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2m666"] Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.989889 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.991327 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-db-sync-config-data\") pod \"barbican-db-sync-vm2hn\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.991379 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-combined-ca-bundle\") pod \"barbican-db-sync-vm2hn\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.991443 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-combined-ca-bundle\") pod \"neutron-db-sync-4pqmg\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.991485 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgzqw\" (UniqueName: \"kubernetes.io/projected/67596249-6134-4ecd-8c9f-865a51c1cbfa-kube-api-access-hgzqw\") pod \"neutron-db-sync-4pqmg\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.991520 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-config\") pod \"neutron-db-sync-4pqmg\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:55:59 crc 
kubenswrapper[4878]: I1204 15:55:59.991553 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8v4k\" (UniqueName: \"kubernetes.io/projected/d7a20413-55ed-48d6-98c3-0bd98368deaa-kube-api-access-v8v4k\") pod \"barbican-db-sync-vm2hn\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:55:59 crc kubenswrapper[4878]: I1204 15:55:59.999429 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.000221 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-db-sync-config-data\") pod \"barbican-db-sync-vm2hn\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.001467 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-combined-ca-bundle\") pod \"barbican-db-sync-vm2hn\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.009901 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-874tf"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.011277 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.012111 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-sljcs" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.021054 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-combined-ca-bundle\") pod \"neutron-db-sync-4pqmg\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.022536 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.022588 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.026306 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ftlgh" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.031834 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.037213 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2m666"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.040488 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-config\") pod \"neutron-db-sync-4pqmg\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.041635 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgzqw\" (UniqueName: \"kubernetes.io/projected/67596249-6134-4ecd-8c9f-865a51c1cbfa-kube-api-access-hgzqw\") pod \"neutron-db-sync-4pqmg\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.051784 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8v4k\" (UniqueName: \"kubernetes.io/projected/d7a20413-55ed-48d6-98c3-0bd98368deaa-kube-api-access-v8v4k\") pod \"barbican-db-sync-vm2hn\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.073793 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-874tf"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.101259 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.101319 4878 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85096ea-b51a-4cda-a48b-fe63910073bb-logs\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.101365 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.101398 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.101438 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.101476 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5rz\" (UniqueName: \"kubernetes.io/projected/b71ceafd-10e6-4b24-8021-a62932b44acb-kube-api-access-7q5rz\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc 
kubenswrapper[4878]: I1204 15:56:00.101529 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e85096ea-b51a-4cda-a48b-fe63910073bb-horizon-secret-key\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.101588 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-logs\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.101630 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f2fd\" (UniqueName: \"kubernetes.io/projected/ab5ba953-d18d-4990-85a5-1b40492af0c4-kube-api-access-9f2fd\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.101650 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zfv7\" (UniqueName: \"kubernetes.io/projected/e85096ea-b51a-4cda-a48b-fe63910073bb-kube-api-access-7zfv7\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.102134 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-config\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.102180 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.102199 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.102223 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-config-data\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.102243 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-scripts\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.102274 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-config-data\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " 
pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.102301 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-scripts\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.102318 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.102336 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.103957 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.107830 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.114575 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.119420 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" event={"ID":"f250d56b-91ec-4897-88b0-d33f4fbbec3e","Type":"ContainerDied","Data":"f891e110eabb6737b0e669e115873c9302e62ce5007f45316eafccfd4a857c6d"} Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.119494 4878 scope.go:117] "RemoveContainer" containerID="f64d46cd85579bd5608d1977a8554f858fe2845fe380c0c3dbb3dfb662b25610" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.119754 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-8gp98" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.120714 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.121033 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.178744 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.201420 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8gp98"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.203768 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.203850 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.203912 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.203944 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.203972 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9e69f1bb-0019-4fee-b04b-d4e6319c61db-logs\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204012 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs2fx\" (UniqueName: \"kubernetes.io/projected/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-kube-api-access-gs2fx\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204046 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q5rz\" (UniqueName: \"kubernetes.io/projected/b71ceafd-10e6-4b24-8021-a62932b44acb-kube-api-access-7q5rz\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204075 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-scripts\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204142 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204176 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/e85096ea-b51a-4cda-a48b-fe63910073bb-horizon-secret-key\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204207 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204263 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-config-data\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204294 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-combined-ca-bundle\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204339 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-logs\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204371 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204397 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f2fd\" (UniqueName: \"kubernetes.io/projected/ab5ba953-d18d-4990-85a5-1b40492af0c4-kube-api-access-9f2fd\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204424 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zfv7\" (UniqueName: \"kubernetes.io/projected/e85096ea-b51a-4cda-a48b-fe63910073bb-kube-api-access-7zfv7\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204471 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpkkp\" (UniqueName: \"kubernetes.io/projected/9e69f1bb-0019-4fee-b04b-d4e6319c61db-kube-api-access-jpkkp\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204508 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-config\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204544 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204580 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204607 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204640 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-config-data\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204663 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-scripts\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204696 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204723 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-config-data\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204753 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-scripts\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204777 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204808 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204835 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " 
pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204899 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.204937 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85096ea-b51a-4cda-a48b-fe63910073bb-logs\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.205138 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.205415 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85096ea-b51a-4cda-a48b-fe63910073bb-logs\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.206347 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-config\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.206657 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.207530 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.208698 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.209035 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-logs\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.209509 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.210657 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.211327 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-scripts\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.211506 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-config-data\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.214652 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.214944 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e85096ea-b51a-4cda-a48b-fe63910073bb-horizon-secret-key\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.215815 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.219251 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-scripts\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.221602 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-8gp98"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.223045 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-config-data\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.229222 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-public-tls-certs\") pod 
\"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.236152 4878 scope.go:117] "RemoveContainer" containerID="ba37dc569707819e9d94abde67ff19a0c3782794d789d2195f01d3528da29451" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.236584 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zfv7\" (UniqueName: \"kubernetes.io/projected/e85096ea-b51a-4cda-a48b-fe63910073bb-kube-api-access-7zfv7\") pod \"horizon-7787668ff9-nlh6p\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.239296 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f2fd\" (UniqueName: \"kubernetes.io/projected/ab5ba953-d18d-4990-85a5-1b40492af0c4-kube-api-access-9f2fd\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.242602 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.258076 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q5rz\" (UniqueName: \"kubernetes.io/projected/b71ceafd-10e6-4b24-8021-a62932b44acb-kube-api-access-7q5rz\") pod \"dnsmasq-dns-785d8bcb8c-2m666\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.283022 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.309502 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e69f1bb-0019-4fee-b04b-d4e6319c61db-logs\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.309588 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs2fx\" (UniqueName: \"kubernetes.io/projected/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-kube-api-access-gs2fx\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.309633 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-scripts\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 
15:56:00.309653 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.309694 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.309718 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-config-data\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.309750 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-combined-ca-bundle\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.309830 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.309942 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpkkp\" 
(UniqueName: \"kubernetes.io/projected/9e69f1bb-0019-4fee-b04b-d4e6319c61db-kube-api-access-jpkkp\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.310035 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.310102 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.310204 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.310248 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.310990 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.315631 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.317027 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e69f1bb-0019-4fee-b04b-d4e6319c61db-logs\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.317650 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-config-data\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.317790 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.323008 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-combined-ca-bundle\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" 
Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.323578 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-scripts\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.325484 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.325650 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.326088 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.340905 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpkkp\" (UniqueName: \"kubernetes.io/projected/9e69f1bb-0019-4fee-b04b-d4e6319c61db-kube-api-access-jpkkp\") pod \"placement-db-sync-874tf\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.344370 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.345243 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.363111 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs2fx\" (UniqueName: \"kubernetes.io/projected/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-kube-api-access-gs2fx\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.385400 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4k8h9"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.405642 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.440217 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-874tf" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.442596 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.471332 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.689106 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2gmgs"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.738767 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.744011 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.750921 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.751122 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.762277 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.833922 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b9b4f5745-7mt44"] Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.840573 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-scripts\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.840706 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.840729 4878 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.840833 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-log-httpd\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.840858 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-run-httpd\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.840917 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrtc\" (UniqueName: \"kubernetes.io/projected/a9f20b46-41f4-4a66-a21c-d187f50fe664-kube-api-access-qkrtc\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.845651 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-config-data\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.846252 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.846306 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.847653 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-sljcs"] Dec 04 15:56:00 crc kubenswrapper[4878]: W1204 15:56:00.884504 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64e9a31b_b17d_4589_a5fe_41f7ea2973b8.slice/crio-94c29e7d06e78ffd67e338498eabae48c63e2e3e07da42a24d6beb15f4d98135 WatchSource:0}: Error finding container 94c29e7d06e78ffd67e338498eabae48c63e2e3e07da42a24d6beb15f4d98135: Status 404 returned error can't find the container with id 94c29e7d06e78ffd67e338498eabae48c63e2e3e07da42a24d6beb15f4d98135 Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.953366 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-log-httpd\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.953422 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-run-httpd\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc 
kubenswrapper[4878]: I1204 15:56:00.953459 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkrtc\" (UniqueName: \"kubernetes.io/projected/a9f20b46-41f4-4a66-a21c-d187f50fe664-kube-api-access-qkrtc\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.953516 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-config-data\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.953544 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-scripts\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.953569 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.953590 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.955409 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-log-httpd\") 
pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.962966 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-run-httpd\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:00 crc kubenswrapper[4878]: I1204 15:56:00.983622 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.002861 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.003580 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-config-data\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.009388 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkrtc\" (UniqueName: \"kubernetes.io/projected/a9f20b46-41f4-4a66-a21c-d187f50fe664-kube-api-access-qkrtc\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.009701 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-scripts\") pod \"ceilometer-0\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " pod="openstack/ceilometer-0" Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.077925 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.117142 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-vm2hn"] Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.177110 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4k8h9" event={"ID":"f9f2f41d-8c30-404e-846b-a2a041621fd9","Type":"ContainerStarted","Data":"baad344afdb7b6e07070b93a7a2d8c0dfd78c7738e2f805734aa2910e5b98a87"} Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.227274 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f250d56b-91ec-4897-88b0-d33f4fbbec3e" path="/var/lib/kubelet/pods/f250d56b-91ec-4897-88b0-d33f4fbbec3e/volumes" Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.228685 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sljcs" event={"ID":"b7b4a412-5105-473d-8037-1b43c331046b","Type":"ContainerStarted","Data":"6433df386b188f72439f0f42b6dff7a8d31309a6c646800a05b6ae14d2c38918"} Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.228722 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b9b4f5745-7mt44" event={"ID":"64e9a31b-b17d-4589-a5fe-41f7ea2973b8","Type":"ContainerStarted","Data":"94c29e7d06e78ffd67e338498eabae48c63e2e3e07da42a24d6beb15f4d98135"} Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.228739 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" 
event={"ID":"4b5334f5-9a66-4256-a413-befc5f23b01b","Type":"ContainerStarted","Data":"323d3c79a6141b33c54fedbdf3fdcf8fad791aca2706ab6f7b59eac6f97a784d"} Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.408852 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-4pqmg"] Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.693607 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7787668ff9-nlh6p"] Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.861140 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:56:01 crc kubenswrapper[4878]: W1204 15:56:01.899761 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab5ba953_d18d_4990_85a5_1b40492af0c4.slice/crio-eb2ee95eaa40009719211d015692dccb43319ac19eb51a621c1b03bef64d1f1b WatchSource:0}: Error finding container eb2ee95eaa40009719211d015692dccb43319ac19eb51a621c1b03bef64d1f1b: Status 404 returned error can't find the container with id eb2ee95eaa40009719211d015692dccb43319ac19eb51a621c1b03bef64d1f1b Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.941436 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2m666"] Dec 04 15:56:01 crc kubenswrapper[4878]: I1204 15:56:01.976713 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:56:01 crc kubenswrapper[4878]: W1204 15:56:01.982374 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb71ceafd_10e6_4b24_8021_a62932b44acb.slice/crio-df7902fa2041a42c6ec4eb5b55eb3bf81ab8fe67bb99c5a45b9071301ec5f7e0 WatchSource:0}: Error finding container df7902fa2041a42c6ec4eb5b55eb3bf81ab8fe67bb99c5a45b9071301ec5f7e0: Status 404 returned error can't find the container with id 
df7902fa2041a42c6ec4eb5b55eb3bf81ab8fe67bb99c5a45b9071301ec5f7e0 Dec 04 15:56:02 crc kubenswrapper[4878]: W1204 15:56:02.025077 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fd4c7e4_ea3d_4d93_853b_a575b29d06c3.slice/crio-e4b62c6306809511578de24d96740a78fe98503a55d6f29da5f64b4c0e1ed13a WatchSource:0}: Error finding container e4b62c6306809511578de24d96740a78fe98503a55d6f29da5f64b4c0e1ed13a: Status 404 returned error can't find the container with id e4b62c6306809511578de24d96740a78fe98503a55d6f29da5f64b4c0e1ed13a Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.095067 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-874tf"] Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.118222 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.205726 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vm2hn" event={"ID":"d7a20413-55ed-48d6-98c3-0bd98368deaa","Type":"ContainerStarted","Data":"7891db646ed00c497fca6b9a3e7b8d80be2b544210ba562bc27e448792a9c8ca"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.210778 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" event={"ID":"b71ceafd-10e6-4b24-8021-a62932b44acb","Type":"ContainerStarted","Data":"df7902fa2041a42c6ec4eb5b55eb3bf81ab8fe67bb99c5a45b9071301ec5f7e0"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.215439 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4k8h9" event={"ID":"f9f2f41d-8c30-404e-846b-a2a041621fd9","Type":"ContainerStarted","Data":"4c53099853e21956133269693abdf9a9ee5aace0f0fcce7c6a2caaf83304be86"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.247708 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3","Type":"ContainerStarted","Data":"e4b62c6306809511578de24d96740a78fe98503a55d6f29da5f64b4c0e1ed13a"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.249487 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4k8h9" podStartSLOduration=3.249463041 podStartE2EDuration="3.249463041s" podCreationTimestamp="2025-12-04 15:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:02.243909781 +0000 UTC m=+1206.206446757" watchObservedRunningTime="2025-12-04 15:56:02.249463041 +0000 UTC m=+1206.211999997" Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.252856 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-874tf" event={"ID":"9e69f1bb-0019-4fee-b04b-d4e6319c61db","Type":"ContainerStarted","Data":"e0e360bb81eca7a5259c7768f204db3be16be2e27bd5651a8daec36a5cdfd976"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.258159 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787668ff9-nlh6p" event={"ID":"e85096ea-b51a-4cda-a48b-fe63910073bb","Type":"ContainerStarted","Data":"06d00462f632a4353398957f180c5c5fda1817539563b96f17d242b440b1c7a0"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.263502 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab5ba953-d18d-4990-85a5-1b40492af0c4","Type":"ContainerStarted","Data":"eb2ee95eaa40009719211d015692dccb43319ac19eb51a621c1b03bef64d1f1b"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.295806 4878 generic.go:334] "Generic (PLEG): container finished" podID="4b5334f5-9a66-4256-a413-befc5f23b01b" containerID="681a492137c30dedc065c538420e983bbd7851529f5b0f8f5d2db2d5c86d1b31" exitCode=0 Dec 04 15:56:02 crc kubenswrapper[4878]: 
I1204 15:56:02.295970 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" event={"ID":"4b5334f5-9a66-4256-a413-befc5f23b01b","Type":"ContainerDied","Data":"681a492137c30dedc065c538420e983bbd7851529f5b0f8f5d2db2d5c86d1b31"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.380269 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4pqmg" event={"ID":"67596249-6134-4ecd-8c9f-865a51c1cbfa","Type":"ContainerStarted","Data":"baec7959ee2d1c9532b754e896f320bdb3feb7e0859f858faea0917807e28192"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.380667 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4pqmg" event={"ID":"67596249-6134-4ecd-8c9f-865a51c1cbfa","Type":"ContainerStarted","Data":"95de88d3d70995391b6f2a5069c93b8fdc22e99bf8f3ed10a645533d0ed921d2"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.390753 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9f20b46-41f4-4a66-a21c-d187f50fe664","Type":"ContainerStarted","Data":"c9a2c697b0494970ff2f0e0d2ad06259d7ff35ee510a7a46977960f654857384"} Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.404002 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-4pqmg" podStartSLOduration=3.4039771930000002 podStartE2EDuration="3.403977193s" podCreationTimestamp="2025-12-04 15:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:02.398151017 +0000 UTC m=+1206.360687993" watchObservedRunningTime="2025-12-04 15:56:02.403977193 +0000 UTC m=+1206.366514149" Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.824580 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.920685 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-swift-storage-0\") pod \"4b5334f5-9a66-4256-a413-befc5f23b01b\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.920816 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-nb\") pod \"4b5334f5-9a66-4256-a413-befc5f23b01b\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.920955 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-config\") pod \"4b5334f5-9a66-4256-a413-befc5f23b01b\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.921031 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-sb\") pod \"4b5334f5-9a66-4256-a413-befc5f23b01b\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.921108 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-svc\") pod \"4b5334f5-9a66-4256-a413-befc5f23b01b\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.921175 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j55q2\" 
(UniqueName: \"kubernetes.io/projected/4b5334f5-9a66-4256-a413-befc5f23b01b-kube-api-access-j55q2\") pod \"4b5334f5-9a66-4256-a413-befc5f23b01b\" (UID: \"4b5334f5-9a66-4256-a413-befc5f23b01b\") " Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.953560 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b5334f5-9a66-4256-a413-befc5f23b01b-kube-api-access-j55q2" (OuterVolumeSpecName: "kube-api-access-j55q2") pod "4b5334f5-9a66-4256-a413-befc5f23b01b" (UID: "4b5334f5-9a66-4256-a413-befc5f23b01b"). InnerVolumeSpecName "kube-api-access-j55q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.973354 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b5334f5-9a66-4256-a413-befc5f23b01b" (UID: "4b5334f5-9a66-4256-a413-befc5f23b01b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.981303 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-config" (OuterVolumeSpecName: "config") pod "4b5334f5-9a66-4256-a413-befc5f23b01b" (UID: "4b5334f5-9a66-4256-a413-befc5f23b01b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.986789 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4b5334f5-9a66-4256-a413-befc5f23b01b" (UID: "4b5334f5-9a66-4256-a413-befc5f23b01b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.994979 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4b5334f5-9a66-4256-a413-befc5f23b01b" (UID: "4b5334f5-9a66-4256-a413-befc5f23b01b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:02 crc kubenswrapper[4878]: I1204 15:56:02.995221 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4b5334f5-9a66-4256-a413-befc5f23b01b" (UID: "4b5334f5-9a66-4256-a413-befc5f23b01b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.026130 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.026195 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.026213 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.026227 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-ovsdbserver-sb\") on node \"crc\" DevicePath 
\"\"" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.026240 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b5334f5-9a66-4256-a413-befc5f23b01b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.026254 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j55q2\" (UniqueName: \"kubernetes.io/projected/4b5334f5-9a66-4256-a413-befc5f23b01b-kube-api-access-j55q2\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.426692 4878 generic.go:334] "Generic (PLEG): container finished" podID="b71ceafd-10e6-4b24-8021-a62932b44acb" containerID="2557c90b407cf8b6842d50e669c2a0d817cb46b0f22b127ff8704e777f2b29e1" exitCode=0 Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.426839 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" event={"ID":"b71ceafd-10e6-4b24-8021-a62932b44acb","Type":"ContainerDied","Data":"2557c90b407cf8b6842d50e669c2a0d817cb46b0f22b127ff8704e777f2b29e1"} Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.451194 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.451445 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-2gmgs" event={"ID":"4b5334f5-9a66-4256-a413-befc5f23b01b","Type":"ContainerDied","Data":"323d3c79a6141b33c54fedbdf3fdcf8fad791aca2706ab6f7b59eac6f97a784d"} Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.451561 4878 scope.go:117] "RemoveContainer" containerID="681a492137c30dedc065c538420e983bbd7851529f5b0f8f5d2db2d5c86d1b31" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.562424 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.673318 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2gmgs"] Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.706752 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-2gmgs"] Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.719983 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7787668ff9-nlh6p"] Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.731244 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-59844c56b5-mjb67"] Dec 04 15:56:03 crc kubenswrapper[4878]: E1204 15:56:03.731764 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b5334f5-9a66-4256-a413-befc5f23b01b" containerName="init" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.731780 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b5334f5-9a66-4256-a413-befc5f23b01b" containerName="init" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.731999 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b5334f5-9a66-4256-a413-befc5f23b01b" containerName="init" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 
15:56:03.740472 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.741843 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.752106 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.760106 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59844c56b5-mjb67"] Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.847027 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c11b03d2-f274-4022-924e-753fad2cf037-horizon-secret-key\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.847138 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-config-data\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.847184 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4jr\" (UniqueName: \"kubernetes.io/projected/c11b03d2-f274-4022-924e-753fad2cf037-kube-api-access-tg4jr\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.847227 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c11b03d2-f274-4022-924e-753fad2cf037-logs\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.847251 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-scripts\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.956966 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c11b03d2-f274-4022-924e-753fad2cf037-horizon-secret-key\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.957138 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-config-data\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.957215 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg4jr\" (UniqueName: \"kubernetes.io/projected/c11b03d2-f274-4022-924e-753fad2cf037-kube-api-access-tg4jr\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.957309 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11b03d2-f274-4022-924e-753fad2cf037-logs\") pod 
\"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.957355 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-scripts\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.958934 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-config-data\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.959731 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11b03d2-f274-4022-924e-753fad2cf037-logs\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.960225 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c11b03d2-f274-4022-924e-753fad2cf037-horizon-secret-key\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 15:56:03.968240 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-scripts\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:03 crc kubenswrapper[4878]: I1204 
15:56:03.986147 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg4jr\" (UniqueName: \"kubernetes.io/projected/c11b03d2-f274-4022-924e-753fad2cf037-kube-api-access-tg4jr\") pod \"horizon-59844c56b5-mjb67\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:04 crc kubenswrapper[4878]: I1204 15:56:04.112680 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:04 crc kubenswrapper[4878]: I1204 15:56:04.573631 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3","Type":"ContainerStarted","Data":"2784a347257a95161eb37034bc6dea5798d9f1a15224865cae57aaeb278a4756"} Dec 04 15:56:04 crc kubenswrapper[4878]: I1204 15:56:04.593848 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" event={"ID":"b71ceafd-10e6-4b24-8021-a62932b44acb","Type":"ContainerStarted","Data":"2f59f63b291dbc3fed045193b3b5b02b48a9fc4010c4b2251ddb2458db909754"} Dec 04 15:56:04 crc kubenswrapper[4878]: I1204 15:56:04.594113 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:04 crc kubenswrapper[4878]: I1204 15:56:04.598794 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab5ba953-d18d-4990-85a5-1b40492af0c4","Type":"ContainerStarted","Data":"b004ea6ea1dd63a395fa369b8d6da91815c46d05154be673d35e8c97f31c3c01"} Dec 04 15:56:04 crc kubenswrapper[4878]: I1204 15:56:04.637158 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" podStartSLOduration=5.637137173 podStartE2EDuration="5.637137173s" podCreationTimestamp="2025-12-04 15:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:04.632054805 +0000 UTC m=+1208.594591761" watchObservedRunningTime="2025-12-04 15:56:04.637137173 +0000 UTC m=+1208.599674129" Dec 04 15:56:04 crc kubenswrapper[4878]: I1204 15:56:04.827429 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-59844c56b5-mjb67"] Dec 04 15:56:05 crc kubenswrapper[4878]: I1204 15:56:05.195763 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b5334f5-9a66-4256-a413-befc5f23b01b" path="/var/lib/kubelet/pods/4b5334f5-9a66-4256-a413-befc5f23b01b/volumes" Dec 04 15:56:05 crc kubenswrapper[4878]: I1204 15:56:05.618843 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59844c56b5-mjb67" event={"ID":"c11b03d2-f274-4022-924e-753fad2cf037","Type":"ContainerStarted","Data":"6b15b701116cb2a057d141b9baa6c631e14c3c554ff8aa37896a23b425119c0e"} Dec 04 15:56:07 crc kubenswrapper[4878]: I1204 15:56:07.656423 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3","Type":"ContainerStarted","Data":"88711bc78c2b316630d1605a419fe9bbd39dabbb000bedfaf7fd9e7d499528ec"} Dec 04 15:56:07 crc kubenswrapper[4878]: I1204 15:56:07.657451 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerName="glance-log" containerID="cri-o://2784a347257a95161eb37034bc6dea5798d9f1a15224865cae57aaeb278a4756" gracePeriod=30 Dec 04 15:56:07 crc kubenswrapper[4878]: I1204 15:56:07.657534 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerName="glance-httpd" containerID="cri-o://88711bc78c2b316630d1605a419fe9bbd39dabbb000bedfaf7fd9e7d499528ec" gracePeriod=30 Dec 04 15:56:07 crc 
kubenswrapper[4878]: I1204 15:56:07.668698 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab5ba953-d18d-4990-85a5-1b40492af0c4","Type":"ContainerStarted","Data":"369f4c660183972a4fa5ef34658c25ec1fe7ca6351d43507c5acf31bd44022d5"} Dec 04 15:56:07 crc kubenswrapper[4878]: I1204 15:56:07.669128 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerName="glance-log" containerID="cri-o://b004ea6ea1dd63a395fa369b8d6da91815c46d05154be673d35e8c97f31c3c01" gracePeriod=30 Dec 04 15:56:07 crc kubenswrapper[4878]: I1204 15:56:07.669174 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerName="glance-httpd" containerID="cri-o://369f4c660183972a4fa5ef34658c25ec1fe7ca6351d43507c5acf31bd44022d5" gracePeriod=30 Dec 04 15:56:07 crc kubenswrapper[4878]: I1204 15:56:07.686571 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.686512877 podStartE2EDuration="8.686512877s" podCreationTimestamp="2025-12-04 15:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:07.680355582 +0000 UTC m=+1211.642892558" watchObservedRunningTime="2025-12-04 15:56:07.686512877 +0000 UTC m=+1211.649049843" Dec 04 15:56:07 crc kubenswrapper[4878]: I1204 15:56:07.732654 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.732627929 podStartE2EDuration="8.732627929s" podCreationTimestamp="2025-12-04 15:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-04 15:56:07.70965155 +0000 UTC m=+1211.672188506" watchObservedRunningTime="2025-12-04 15:56:07.732627929 +0000 UTC m=+1211.695164885" Dec 04 15:56:08 crc kubenswrapper[4878]: I1204 15:56:08.687684 4878 generic.go:334] "Generic (PLEG): container finished" podID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerID="369f4c660183972a4fa5ef34658c25ec1fe7ca6351d43507c5acf31bd44022d5" exitCode=0 Dec 04 15:56:08 crc kubenswrapper[4878]: I1204 15:56:08.688297 4878 generic.go:334] "Generic (PLEG): container finished" podID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerID="b004ea6ea1dd63a395fa369b8d6da91815c46d05154be673d35e8c97f31c3c01" exitCode=143 Dec 04 15:56:08 crc kubenswrapper[4878]: I1204 15:56:08.687768 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab5ba953-d18d-4990-85a5-1b40492af0c4","Type":"ContainerDied","Data":"369f4c660183972a4fa5ef34658c25ec1fe7ca6351d43507c5acf31bd44022d5"} Dec 04 15:56:08 crc kubenswrapper[4878]: I1204 15:56:08.688392 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab5ba953-d18d-4990-85a5-1b40492af0c4","Type":"ContainerDied","Data":"b004ea6ea1dd63a395fa369b8d6da91815c46d05154be673d35e8c97f31c3c01"} Dec 04 15:56:08 crc kubenswrapper[4878]: I1204 15:56:08.692188 4878 generic.go:334] "Generic (PLEG): container finished" podID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerID="88711bc78c2b316630d1605a419fe9bbd39dabbb000bedfaf7fd9e7d499528ec" exitCode=0 Dec 04 15:56:08 crc kubenswrapper[4878]: I1204 15:56:08.692245 4878 generic.go:334] "Generic (PLEG): container finished" podID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerID="2784a347257a95161eb37034bc6dea5798d9f1a15224865cae57aaeb278a4756" exitCode=143 Dec 04 15:56:08 crc kubenswrapper[4878]: I1204 15:56:08.692219 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3","Type":"ContainerDied","Data":"88711bc78c2b316630d1605a419fe9bbd39dabbb000bedfaf7fd9e7d499528ec"} Dec 04 15:56:08 crc kubenswrapper[4878]: I1204 15:56:08.692316 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3","Type":"ContainerDied","Data":"2784a347257a95161eb37034bc6dea5798d9f1a15224865cae57aaeb278a4756"} Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.409138 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.490934 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5h7ng"] Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.491353 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="dnsmasq-dns" containerID="cri-o://edb078c752fb42df809c9ee9c06ee6e96e28dc9834f8c46bc5d5c2c457810b17" gracePeriod=10 Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.613654 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b9b4f5745-7mt44"] Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.674184 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-db576cdd4-fp9zg"] Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.675939 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.681714 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.704592 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-db576cdd4-fp9zg"] Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.725052 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59844c56b5-mjb67"] Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.737883 4878 generic.go:334] "Generic (PLEG): container finished" podID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerID="edb078c752fb42df809c9ee9c06ee6e96e28dc9834f8c46bc5d5c2c457810b17" exitCode=0 Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.737987 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" event={"ID":"84ae3851-e9c9-4643-97c2-937ad6b572f9","Type":"ContainerDied","Data":"edb078c752fb42df809c9ee9c06ee6e96e28dc9834f8c46bc5d5c2c457810b17"} Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.748790 4878 generic.go:334] "Generic (PLEG): container finished" podID="f9f2f41d-8c30-404e-846b-a2a041621fd9" containerID="4c53099853e21956133269693abdf9a9ee5aace0f0fcce7c6a2caaf83304be86" exitCode=0 Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.748865 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4k8h9" event={"ID":"f9f2f41d-8c30-404e-846b-a2a041621fd9","Type":"ContainerDied","Data":"4c53099853e21956133269693abdf9a9ee5aace0f0fcce7c6a2caaf83304be86"} Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.835524 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c56cbf696-wj6zc"] Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.837357 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.867164 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-tls-certs\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.869300 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86826\" (UniqueName: \"kubernetes.io/projected/50fc708e-8903-4765-aa76-c2125c0b8d22-kube-api-access-86826\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.869483 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-scripts\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.869667 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-config-data\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.869778 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-secret-key\") pod \"horizon-db576cdd4-fp9zg\" (UID: 
\"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.869824 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50fc708e-8903-4765-aa76-c2125c0b8d22-logs\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.869950 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-combined-ca-bundle\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.887817 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c56cbf696-wj6zc"] Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.970666 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/63307580-b46f-421f-bbf5-52eafde58f6c-horizon-tls-certs\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.971118 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-config-data\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.971143 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t5pl\" 
(UniqueName: \"kubernetes.io/projected/63307580-b46f-421f-bbf5-52eafde58f6c-kube-api-access-2t5pl\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.971174 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-secret-key\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.971195 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50fc708e-8903-4765-aa76-c2125c0b8d22-logs\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.971219 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63307580-b46f-421f-bbf5-52eafde58f6c-horizon-secret-key\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.971245 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-combined-ca-bundle\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.971268 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/63307580-b46f-421f-bbf5-52eafde58f6c-config-data\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.972212 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50fc708e-8903-4765-aa76-c2125c0b8d22-logs\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.973030 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-tls-certs\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.973088 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86826\" (UniqueName: \"kubernetes.io/projected/50fc708e-8903-4765-aa76-c2125c0b8d22-kube-api-access-86826\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.973136 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63307580-b46f-421f-bbf5-52eafde58f6c-combined-ca-bundle\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.973163 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-scripts\") pod 
\"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.973192 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63307580-b46f-421f-bbf5-52eafde58f6c-scripts\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.973216 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63307580-b46f-421f-bbf5-52eafde58f6c-logs\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.976029 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-scripts\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.976165 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-config-data\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.982991 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-combined-ca-bundle\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 
crc kubenswrapper[4878]: I1204 15:56:10.983441 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-tls-certs\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.983500 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-secret-key\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:10 crc kubenswrapper[4878]: I1204 15:56:10.992974 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86826\" (UniqueName: \"kubernetes.io/projected/50fc708e-8903-4765-aa76-c2125c0b8d22-kube-api-access-86826\") pod \"horizon-db576cdd4-fp9zg\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.005458 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.075041 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t5pl\" (UniqueName: \"kubernetes.io/projected/63307580-b46f-421f-bbf5-52eafde58f6c-kube-api-access-2t5pl\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.075150 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63307580-b46f-421f-bbf5-52eafde58f6c-horizon-secret-key\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.075199 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63307580-b46f-421f-bbf5-52eafde58f6c-config-data\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.079174 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/63307580-b46f-421f-bbf5-52eafde58f6c-config-data\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.081317 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63307580-b46f-421f-bbf5-52eafde58f6c-combined-ca-bundle\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc 
kubenswrapper[4878]: I1204 15:56:11.081393 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63307580-b46f-421f-bbf5-52eafde58f6c-scripts\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.081429 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63307580-b46f-421f-bbf5-52eafde58f6c-logs\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.081532 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/63307580-b46f-421f-bbf5-52eafde58f6c-horizon-tls-certs\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.082823 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/63307580-b46f-421f-bbf5-52eafde58f6c-scripts\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.083669 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63307580-b46f-421f-bbf5-52eafde58f6c-logs\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.086267 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/63307580-b46f-421f-bbf5-52eafde58f6c-horizon-tls-certs\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.087195 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63307580-b46f-421f-bbf5-52eafde58f6c-combined-ca-bundle\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.088623 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/63307580-b46f-421f-bbf5-52eafde58f6c-horizon-secret-key\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.098495 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t5pl\" (UniqueName: \"kubernetes.io/projected/63307580-b46f-421f-bbf5-52eafde58f6c-kube-api-access-2t5pl\") pod \"horizon-6c56cbf696-wj6zc\" (UID: \"63307580-b46f-421f-bbf5-52eafde58f6c\") " pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:11 crc kubenswrapper[4878]: I1204 15:56:11.177759 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:13 crc kubenswrapper[4878]: I1204 15:56:13.603439 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.394385 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.562015 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-credential-keys\") pod \"f9f2f41d-8c30-404e-846b-a2a041621fd9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.562442 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-combined-ca-bundle\") pod \"f9f2f41d-8c30-404e-846b-a2a041621fd9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.562584 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-scripts\") pod \"f9f2f41d-8c30-404e-846b-a2a041621fd9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.562637 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-fernet-keys\") pod \"f9f2f41d-8c30-404e-846b-a2a041621fd9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.562663 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kgdq\" (UniqueName: \"kubernetes.io/projected/f9f2f41d-8c30-404e-846b-a2a041621fd9-kube-api-access-6kgdq\") pod \"f9f2f41d-8c30-404e-846b-a2a041621fd9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.562793 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-config-data\") pod \"f9f2f41d-8c30-404e-846b-a2a041621fd9\" (UID: \"f9f2f41d-8c30-404e-846b-a2a041621fd9\") " Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.570342 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9f2f41d-8c30-404e-846b-a2a041621fd9-kube-api-access-6kgdq" (OuterVolumeSpecName: "kube-api-access-6kgdq") pod "f9f2f41d-8c30-404e-846b-a2a041621fd9" (UID: "f9f2f41d-8c30-404e-846b-a2a041621fd9"). InnerVolumeSpecName "kube-api-access-6kgdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.570756 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f9f2f41d-8c30-404e-846b-a2a041621fd9" (UID: "f9f2f41d-8c30-404e-846b-a2a041621fd9"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.571427 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-scripts" (OuterVolumeSpecName: "scripts") pod "f9f2f41d-8c30-404e-846b-a2a041621fd9" (UID: "f9f2f41d-8c30-404e-846b-a2a041621fd9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.575331 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f9f2f41d-8c30-404e-846b-a2a041621fd9" (UID: "f9f2f41d-8c30-404e-846b-a2a041621fd9"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.592737 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-config-data" (OuterVolumeSpecName: "config-data") pod "f9f2f41d-8c30-404e-846b-a2a041621fd9" (UID: "f9f2f41d-8c30-404e-846b-a2a041621fd9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.634768 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9f2f41d-8c30-404e-846b-a2a041621fd9" (UID: "f9f2f41d-8c30-404e-846b-a2a041621fd9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.666390 4878 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.666434 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kgdq\" (UniqueName: \"kubernetes.io/projected/f9f2f41d-8c30-404e-846b-a2a041621fd9-kube-api-access-6kgdq\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.666454 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.666466 4878 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 
04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.666480 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.666496 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9f2f41d-8c30-404e-846b-a2a041621fd9-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.819022 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4k8h9" event={"ID":"f9f2f41d-8c30-404e-846b-a2a041621fd9","Type":"ContainerDied","Data":"baad344afdb7b6e07070b93a7a2d8c0dfd78c7738e2f805734aa2910e5b98a87"} Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.819087 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baad344afdb7b6e07070b93a7a2d8c0dfd78c7738e2f805734aa2910e5b98a87" Dec 04 15:56:14 crc kubenswrapper[4878]: I1204 15:56:14.819438 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4k8h9" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.494721 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4k8h9"] Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.505550 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4k8h9"] Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.589575 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-rgck8"] Dec 04 15:56:15 crc kubenswrapper[4878]: E1204 15:56:15.590082 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9f2f41d-8c30-404e-846b-a2a041621fd9" containerName="keystone-bootstrap" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.590106 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9f2f41d-8c30-404e-846b-a2a041621fd9" containerName="keystone-bootstrap" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.590364 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9f2f41d-8c30-404e-846b-a2a041621fd9" containerName="keystone-bootstrap" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.591121 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.594309 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.596956 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.597111 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l74bm" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.597173 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.597229 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.615816 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rgck8"] Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.698218 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrcq\" (UniqueName: \"kubernetes.io/projected/4b6c7cc6-40e3-44ff-bd1c-6741af643002-kube-api-access-sdrcq\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.698455 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-credential-keys\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.698536 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-scripts\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.698579 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-config-data\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.698607 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-combined-ca-bundle\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.698634 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-fernet-keys\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.801075 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-credential-keys\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.801182 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-scripts\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.801213 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-config-data\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.801236 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-fernet-keys\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.801440 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-combined-ca-bundle\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.801490 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdrcq\" (UniqueName: \"kubernetes.io/projected/4b6c7cc6-40e3-44ff-bd1c-6741af643002-kube-api-access-sdrcq\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.809409 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-combined-ca-bundle\") pod 
\"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.809607 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-fernet-keys\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.810924 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-config-data\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.822619 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-scripts\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.823632 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-credential-keys\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc kubenswrapper[4878]: I1204 15:56:15.823763 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdrcq\" (UniqueName: \"kubernetes.io/projected/4b6c7cc6-40e3-44ff-bd1c-6741af643002-kube-api-access-sdrcq\") pod \"keystone-bootstrap-rgck8\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:15 crc 
kubenswrapper[4878]: I1204 15:56:15.917072 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:17 crc kubenswrapper[4878]: I1204 15:56:17.192334 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9f2f41d-8c30-404e-846b-a2a041621fd9" path="/var/lib/kubelet/pods/f9f2f41d-8c30-404e-846b-a2a041621fd9/volumes" Dec 04 15:56:23 crc kubenswrapper[4878]: I1204 15:56:23.604037 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: i/o timeout" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.605999 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: i/o timeout" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.607234 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:56:28 crc kubenswrapper[4878]: E1204 15:56:28.681352 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Dec 04 15:56:28 crc kubenswrapper[4878]: E1204 15:56:28.681608 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jpkkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-874tf_openstack(9e69f1bb-0019-4fee-b04b-d4e6319c61db): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:56:28 crc kubenswrapper[4878]: E1204 15:56:28.682785 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-874tf" podUID="9e69f1bb-0019-4fee-b04b-d4e6319c61db" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.804464 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.811916 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.883505 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-httpd-run\") pod \"ab5ba953-d18d-4990-85a5-1b40492af0c4\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.884499 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-scripts\") pod \"ab5ba953-d18d-4990-85a5-1b40492af0c4\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.884594 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-public-tls-certs\") pod \"ab5ba953-d18d-4990-85a5-1b40492af0c4\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 
15:56:28.884651 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-sb\") pod \"84ae3851-e9c9-4643-97c2-937ad6b572f9\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.884703 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-logs\") pod \"ab5ba953-d18d-4990-85a5-1b40492af0c4\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.884730 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-combined-ca-bundle\") pod \"ab5ba953-d18d-4990-85a5-1b40492af0c4\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.884757 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-config\") pod \"84ae3851-e9c9-4643-97c2-937ad6b572f9\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.888232 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-svc\") pod \"84ae3851-e9c9-4643-97c2-937ad6b572f9\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.888354 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ab5ba953-d18d-4990-85a5-1b40492af0c4\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " Dec 
04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.888466 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-config-data\") pod \"ab5ba953-d18d-4990-85a5-1b40492af0c4\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.888755 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptvgk\" (UniqueName: \"kubernetes.io/projected/84ae3851-e9c9-4643-97c2-937ad6b572f9-kube-api-access-ptvgk\") pod \"84ae3851-e9c9-4643-97c2-937ad6b572f9\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.888794 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-swift-storage-0\") pod \"84ae3851-e9c9-4643-97c2-937ad6b572f9\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.888881 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-nb\") pod \"84ae3851-e9c9-4643-97c2-937ad6b572f9\" (UID: \"84ae3851-e9c9-4643-97c2-937ad6b572f9\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.889177 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ab5ba953-d18d-4990-85a5-1b40492af0c4" (UID: "ab5ba953-d18d-4990-85a5-1b40492af0c4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.889395 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f2fd\" (UniqueName: \"kubernetes.io/projected/ab5ba953-d18d-4990-85a5-1b40492af0c4-kube-api-access-9f2fd\") pod \"ab5ba953-d18d-4990-85a5-1b40492af0c4\" (UID: \"ab5ba953-d18d-4990-85a5-1b40492af0c4\") " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.890573 4878 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.891711 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-logs" (OuterVolumeSpecName: "logs") pod "ab5ba953-d18d-4990-85a5-1b40492af0c4" (UID: "ab5ba953-d18d-4990-85a5-1b40492af0c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.893726 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-scripts" (OuterVolumeSpecName: "scripts") pod "ab5ba953-d18d-4990-85a5-1b40492af0c4" (UID: "ab5ba953-d18d-4990-85a5-1b40492af0c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.895906 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5ba953-d18d-4990-85a5-1b40492af0c4-kube-api-access-9f2fd" (OuterVolumeSpecName: "kube-api-access-9f2fd") pod "ab5ba953-d18d-4990-85a5-1b40492af0c4" (UID: "ab5ba953-d18d-4990-85a5-1b40492af0c4"). InnerVolumeSpecName "kube-api-access-9f2fd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.900539 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "ab5ba953-d18d-4990-85a5-1b40492af0c4" (UID: "ab5ba953-d18d-4990-85a5-1b40492af0c4"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.935983 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84ae3851-e9c9-4643-97c2-937ad6b572f9-kube-api-access-ptvgk" (OuterVolumeSpecName: "kube-api-access-ptvgk") pod "84ae3851-e9c9-4643-97c2-937ad6b572f9" (UID: "84ae3851-e9c9-4643-97c2-937ad6b572f9"). InnerVolumeSpecName "kube-api-access-ptvgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.947677 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab5ba953-d18d-4990-85a5-1b40492af0c4" (UID: "ab5ba953-d18d-4990-85a5-1b40492af0c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.951066 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "84ae3851-e9c9-4643-97c2-937ad6b572f9" (UID: "84ae3851-e9c9-4643-97c2-937ad6b572f9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.953106 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-config" (OuterVolumeSpecName: "config") pod "84ae3851-e9c9-4643-97c2-937ad6b572f9" (UID: "84ae3851-e9c9-4643-97c2-937ad6b572f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.954644 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-config-data" (OuterVolumeSpecName: "config-data") pod "ab5ba953-d18d-4990-85a5-1b40492af0c4" (UID: "ab5ba953-d18d-4990-85a5-1b40492af0c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.962006 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ab5ba953-d18d-4990-85a5-1b40492af0c4" (UID: "ab5ba953-d18d-4990-85a5-1b40492af0c4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.962099 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "84ae3851-e9c9-4643-97c2-937ad6b572f9" (UID: "84ae3851-e9c9-4643-97c2-937ad6b572f9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.969303 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.970184 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" event={"ID":"84ae3851-e9c9-4643-97c2-937ad6b572f9","Type":"ContainerDied","Data":"7ed14bbe98c6f243dcfa202b9e34fa2d8cd34e142ad9bc7a121d782350f1c3d1"} Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.970258 4878 scope.go:117] "RemoveContainer" containerID="edb078c752fb42df809c9ee9c06ee6e96e28dc9834f8c46bc5d5c2c457810b17" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.975601 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84ae3851-e9c9-4643-97c2-937ad6b572f9" (UID: "84ae3851-e9c9-4643-97c2-937ad6b572f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.976921 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ab5ba953-d18d-4990-85a5-1b40492af0c4","Type":"ContainerDied","Data":"eb2ee95eaa40009719211d015692dccb43319ac19eb51a621c1b03bef64d1f1b"} Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.977007 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:56:28 crc kubenswrapper[4878]: E1204 15:56:28.979082 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-874tf" podUID="9e69f1bb-0019-4fee-b04b-d4e6319c61db" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.993885 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.993932 4878 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.993950 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.993964 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ab5ba953-d18d-4990-85a5-1b40492af0c4-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.993976 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.993991 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-config\") 
on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.994003 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.994032 4878 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.994054 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ba953-d18d-4990-85a5-1b40492af0c4-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.994076 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptvgk\" (UniqueName: \"kubernetes.io/projected/84ae3851-e9c9-4643-97c2-937ad6b572f9-kube-api-access-ptvgk\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.994091 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.994104 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f2fd\" (UniqueName: \"kubernetes.io/projected/ab5ba953-d18d-4990-85a5-1b40492af0c4-kube-api-access-9f2fd\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:28 crc kubenswrapper[4878]: I1204 15:56:28.996726 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "84ae3851-e9c9-4643-97c2-937ad6b572f9" (UID: "84ae3851-e9c9-4643-97c2-937ad6b572f9"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.019934 4878 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.097287 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84ae3851-e9c9-4643-97c2-937ad6b572f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.097320 4878 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.103408 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.115003 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.147299 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:56:29 crc kubenswrapper[4878]: E1204 15:56:29.147913 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerName="glance-httpd" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.147937 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerName="glance-httpd" Dec 04 15:56:29 crc kubenswrapper[4878]: E1204 15:56:29.147965 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="init" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.147976 4878 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="init" Dec 04 15:56:29 crc kubenswrapper[4878]: E1204 15:56:29.147992 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerName="glance-log" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.147999 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerName="glance-log" Dec 04 15:56:29 crc kubenswrapper[4878]: E1204 15:56:29.148017 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="dnsmasq-dns" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.148024 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="dnsmasq-dns" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.148225 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerName="glance-log" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.148247 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5ba953-d18d-4990-85a5-1b40492af0c4" containerName="glance-httpd" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.148260 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="dnsmasq-dns" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.149443 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.152578 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.152657 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.196447 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5ba953-d18d-4990-85a5-1b40492af0c4" path="/var/lib/kubelet/pods/ab5ba953-d18d-4990-85a5-1b40492af0c4/volumes" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.197373 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.302019 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5h7ng"] Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.303539 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-config-data\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.303630 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.303667 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-logs\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.303730 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.303761 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqkf\" (UniqueName: \"kubernetes.io/projected/efccf44a-5dad-4080-8b51-208c7dc43e35-kube-api-access-bvqkf\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.303805 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-scripts\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.303844 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.303939 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.314259 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-5h7ng"] Dec 04 15:56:29 crc kubenswrapper[4878]: E1204 15:56:29.403583 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 04 15:56:29 crc kubenswrapper[4878]: E1204 15:56:29.403810 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n89h666h565hd4h576h576hb4h99h689h556h5b4hddh67bh699hffhd9hf8h74h677hc9hf7h664h7ch567h7bh564h555h684h5d6hd4h95h5d8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls
-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkrtc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a9f20b46-41f4-4a66-a21c-d187f50fe664): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.407252 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.407304 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-logs\") pod \"glance-default-external-api-0\" 
(UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.407403 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.407431 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvqkf\" (UniqueName: \"kubernetes.io/projected/efccf44a-5dad-4080-8b51-208c7dc43e35-kube-api-access-bvqkf\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.407500 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-scripts\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.407540 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.407631 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " 
pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.407711 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-config-data\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.407960 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.408097 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-logs\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.408541 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.413171 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-scripts\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.413221 
4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.417649 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-config-data\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.424957 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.427461 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.429182 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvqkf\" (UniqueName: \"kubernetes.io/projected/efccf44a-5dad-4080-8b51-208c7dc43e35-kube-api-access-bvqkf\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.438746 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.483485 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.619382 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-httpd-run\") pod \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.619560 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-config-data\") pod \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.619606 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-combined-ca-bundle\") pod 
\"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.619720 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.619948 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-logs\") pod \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.620004 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-scripts\") pod \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.620041 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-internal-tls-certs\") pod \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.620089 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs2fx\" (UniqueName: \"kubernetes.io/projected/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-kube-api-access-gs2fx\") pod \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\" (UID: \"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3\") " Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.620105 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" (UID: "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.620825 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-logs" (OuterVolumeSpecName: "logs") pod "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" (UID: "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.621043 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.621074 4878 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.627046 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" (UID: "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.627576 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-kube-api-access-gs2fx" (OuterVolumeSpecName: "kube-api-access-gs2fx") pod "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" (UID: "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3"). 
InnerVolumeSpecName "kube-api-access-gs2fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.631119 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-scripts" (OuterVolumeSpecName: "scripts") pod "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" (UID: "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.659353 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" (UID: "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.680865 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-config-data" (OuterVolumeSpecName: "config-data") pod "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" (UID: "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.682711 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" (UID: "4fd4c7e4-ea3d-4d93-853b-a575b29d06c3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.723606 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.723664 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.723717 4878 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.723732 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.723748 4878 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.723761 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs2fx\" (UniqueName: \"kubernetes.io/projected/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3-kube-api-access-gs2fx\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.750460 4878 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.825620 4878 reconciler_common.go:293] "Volume detached for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.996235 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4fd4c7e4-ea3d-4d93-853b-a575b29d06c3","Type":"ContainerDied","Data":"e4b62c6306809511578de24d96740a78fe98503a55d6f29da5f64b4c0e1ed13a"} Dec 04 15:56:29 crc kubenswrapper[4878]: I1204 15:56:29.996388 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.046845 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.061482 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.077117 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:56:30 crc kubenswrapper[4878]: E1204 15:56:30.077839 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerName="glance-httpd" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.077860 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerName="glance-httpd" Dec 04 15:56:30 crc kubenswrapper[4878]: E1204 15:56:30.077903 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerName="glance-log" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.077921 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerName="glance-log" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.078141 4878 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerName="glance-log" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.078156 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" containerName="glance-httpd" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.079287 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.086380 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.086744 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.135218 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.231813 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltk79\" (UniqueName: \"kubernetes.io/projected/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-kube-api-access-ltk79\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.231918 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.231972 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.232060 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.232085 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.232199 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.232402 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.232530 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.335424 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.335613 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltk79\" (UniqueName: \"kubernetes.io/projected/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-kube-api-access-ltk79\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.335659 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.335705 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.335795 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.335818 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.335853 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.335960 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.336212 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.336312 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.336672 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.343504 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.351958 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.353347 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.359938 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltk79\" (UniqueName: \"kubernetes.io/projected/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-kube-api-access-ltk79\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " 
pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.361030 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.380991 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.427983 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.841650 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:56:30 crc kubenswrapper[4878]: I1204 15:56:30.841734 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:56:30 crc kubenswrapper[4878]: E1204 15:56:30.914247 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 
04 15:56:30 crc kubenswrapper[4878]: E1204 15:56:30.914580 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rmv4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminati
on-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-sljcs_openstack(b7b4a412-5105-473d-8037-1b43c331046b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:56:30 crc kubenswrapper[4878]: E1204 15:56:30.915807 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-sljcs" podUID="b7b4a412-5105-473d-8037-1b43c331046b" Dec 04 15:56:31 crc kubenswrapper[4878]: E1204 15:56:31.011798 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-sljcs" podUID="b7b4a412-5105-473d-8037-1b43c331046b" Dec 04 15:56:31 crc kubenswrapper[4878]: I1204 15:56:31.192957 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fd4c7e4-ea3d-4d93-853b-a575b29d06c3" path="/var/lib/kubelet/pods/4fd4c7e4-ea3d-4d93-853b-a575b29d06c3/volumes" Dec 04 15:56:31 crc kubenswrapper[4878]: I1204 15:56:31.194092 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" path="/var/lib/kubelet/pods/84ae3851-e9c9-4643-97c2-937ad6b572f9/volumes" Dec 04 15:56:31 crc kubenswrapper[4878]: E1204 15:56:31.371112 4878 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 04 15:56:31 crc kubenswrapper[4878]: E1204 15:56:31.371409 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v8v4k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
barbican-db-sync-vm2hn_openstack(d7a20413-55ed-48d6-98c3-0bd98368deaa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 15:56:31 crc kubenswrapper[4878]: E1204 15:56:31.372587 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-vm2hn" podUID="d7a20413-55ed-48d6-98c3-0bd98368deaa" Dec 04 15:56:31 crc kubenswrapper[4878]: I1204 15:56:31.434026 4878 scope.go:117] "RemoveContainer" containerID="9d14b5a5428368e5015ab72e4653b365bb5ce923fd21e5a0f5fb49319f7ee616" Dec 04 15:56:31 crc kubenswrapper[4878]: I1204 15:56:31.819614 4878 scope.go:117] "RemoveContainer" containerID="369f4c660183972a4fa5ef34658c25ec1fe7ca6351d43507c5acf31bd44022d5" Dec 04 15:56:31 crc kubenswrapper[4878]: I1204 15:56:31.874351 4878 scope.go:117] "RemoveContainer" containerID="b004ea6ea1dd63a395fa369b8d6da91815c46d05154be673d35e8c97f31c3c01" Dec 04 15:56:31 crc kubenswrapper[4878]: I1204 15:56:31.903248 4878 scope.go:117] "RemoveContainer" containerID="88711bc78c2b316630d1605a419fe9bbd39dabbb000bedfaf7fd9e7d499528ec" Dec 04 15:56:31 crc kubenswrapper[4878]: I1204 15:56:31.928638 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c56cbf696-wj6zc"] Dec 04 15:56:31 crc kubenswrapper[4878]: I1204 15:56:31.951631 4878 scope.go:117] "RemoveContainer" containerID="2784a347257a95161eb37034bc6dea5798d9f1a15224865cae57aaeb278a4756" Dec 04 15:56:32 crc kubenswrapper[4878]: I1204 15:56:32.027479 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c56cbf696-wj6zc" event={"ID":"63307580-b46f-421f-bbf5-52eafde58f6c","Type":"ContainerStarted","Data":"446aecee8ecd1569cd1315751f5528cc9aa343f36089fa65f0958ca284fb3732"} Dec 04 15:56:32 crc kubenswrapper[4878]: I1204 15:56:32.029326 4878 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/horizon-7787668ff9-nlh6p" event={"ID":"e85096ea-b51a-4cda-a48b-fe63910073bb","Type":"ContainerStarted","Data":"e5524de2dbe51dd2fd12c3179bf60588e2edf8c7b06e50ceefcfd5ea9e56503c"} Dec 04 15:56:32 crc kubenswrapper[4878]: I1204 15:56:32.033806 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59844c56b5-mjb67" event={"ID":"c11b03d2-f274-4022-924e-753fad2cf037","Type":"ContainerStarted","Data":"d7e38a4cad67fb1b7471e37b8a032f1c4f730d828e0290637a2b896ed0265b75"} Dec 04 15:56:32 crc kubenswrapper[4878]: I1204 15:56:32.036771 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b9b4f5745-7mt44" event={"ID":"64e9a31b-b17d-4589-a5fe-41f7ea2973b8","Type":"ContainerStarted","Data":"5f38f6dd6d1f724a8516d792d7504d592c5aaad4e3c96b25e799e8f8f629ac5f"} Dec 04 15:56:32 crc kubenswrapper[4878]: I1204 15:56:32.040914 4878 generic.go:334] "Generic (PLEG): container finished" podID="67596249-6134-4ecd-8c9f-865a51c1cbfa" containerID="baec7959ee2d1c9532b754e896f320bdb3feb7e0859f858faea0917807e28192" exitCode=0 Dec 04 15:56:32 crc kubenswrapper[4878]: I1204 15:56:32.042339 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4pqmg" event={"ID":"67596249-6134-4ecd-8c9f-865a51c1cbfa","Type":"ContainerDied","Data":"baec7959ee2d1c9532b754e896f320bdb3feb7e0859f858faea0917807e28192"} Dec 04 15:56:32 crc kubenswrapper[4878]: E1204 15:56:32.043752 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-vm2hn" podUID="d7a20413-55ed-48d6-98c3-0bd98368deaa" Dec 04 15:56:32 crc kubenswrapper[4878]: I1204 15:56:32.181814 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-db576cdd4-fp9zg"] Dec 04 15:56:32 crc kubenswrapper[4878]: 
I1204 15:56:32.191360 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-rgck8"] Dec 04 15:56:32 crc kubenswrapper[4878]: I1204 15:56:32.483319 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:56:32 crc kubenswrapper[4878]: W1204 15:56:32.563625 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod127be8ff_e2ac_4c65_8bff_fe878e5d8eb4.slice/crio-7b14bedb5ffbf1c5a15d93c7a5f9d3ea787e2c916f5907b14774520a7fb018d7 WatchSource:0}: Error finding container 7b14bedb5ffbf1c5a15d93c7a5f9d3ea787e2c916f5907b14774520a7fb018d7: Status 404 returned error can't find the container with id 7b14bedb5ffbf1c5a15d93c7a5f9d3ea787e2c916f5907b14774520a7fb018d7 Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.073344 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59844c56b5-mjb67" event={"ID":"c11b03d2-f274-4022-924e-753fad2cf037","Type":"ContainerStarted","Data":"9aae95ff4d2584deea3440217247b1816f5d9f207d945925aaee6e2f034ecc75"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.073546 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-59844c56b5-mjb67" podUID="c11b03d2-f274-4022-924e-753fad2cf037" containerName="horizon-log" containerID="cri-o://d7e38a4cad67fb1b7471e37b8a032f1c4f730d828e0290637a2b896ed0265b75" gracePeriod=30 Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.074223 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-59844c56b5-mjb67" podUID="c11b03d2-f274-4022-924e-753fad2cf037" containerName="horizon" containerID="cri-o://9aae95ff4d2584deea3440217247b1816f5d9f207d945925aaee6e2f034ecc75" gracePeriod=30 Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.094282 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-7b9b4f5745-7mt44" event={"ID":"64e9a31b-b17d-4589-a5fe-41f7ea2973b8","Type":"ContainerStarted","Data":"d6c7718074d8c6c3b9561dd2d424fec9a6e0dc30879f362728eccf812a7b0afb"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.094576 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b9b4f5745-7mt44" podUID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerName="horizon-log" containerID="cri-o://5f38f6dd6d1f724a8516d792d7504d592c5aaad4e3c96b25e799e8f8f629ac5f" gracePeriod=30 Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.094691 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b9b4f5745-7mt44" podUID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerName="horizon" containerID="cri-o://d6c7718074d8c6c3b9561dd2d424fec9a6e0dc30879f362728eccf812a7b0afb" gracePeriod=30 Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.102837 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-59844c56b5-mjb67" podStartSLOduration=3.498851765 podStartE2EDuration="30.102796001s" podCreationTimestamp="2025-12-04 15:56:03 +0000 UTC" firstStartedPulling="2025-12-04 15:56:04.847738099 +0000 UTC m=+1208.810275065" lastFinishedPulling="2025-12-04 15:56:31.451682335 +0000 UTC m=+1235.414219301" observedRunningTime="2025-12-04 15:56:33.100046872 +0000 UTC m=+1237.062583838" watchObservedRunningTime="2025-12-04 15:56:33.102796001 +0000 UTC m=+1237.065332957" Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.118274 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db576cdd4-fp9zg" event={"ID":"50fc708e-8903-4765-aa76-c2125c0b8d22","Type":"ContainerStarted","Data":"6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.118831 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db576cdd4-fp9zg" 
event={"ID":"50fc708e-8903-4765-aa76-c2125c0b8d22","Type":"ContainerStarted","Data":"6bbd84b8748b8f9ab0e49273c9c6fb9f678344bf78162f3017aa76705be5260a"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.127402 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4","Type":"ContainerStarted","Data":"7b14bedb5ffbf1c5a15d93c7a5f9d3ea787e2c916f5907b14774520a7fb018d7"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.135278 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c56cbf696-wj6zc" event={"ID":"63307580-b46f-421f-bbf5-52eafde58f6c","Type":"ContainerStarted","Data":"9b1b59c3c3353175e3a94dc2d009c548124e0719de0927be20ea7113e6317819"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.152819 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7787668ff9-nlh6p" podUID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerName="horizon-log" containerID="cri-o://e5524de2dbe51dd2fd12c3179bf60588e2edf8c7b06e50ceefcfd5ea9e56503c" gracePeriod=30 Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.152958 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7787668ff9-nlh6p" podUID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerName="horizon" containerID="cri-o://f57182e0325a3f4cea993a5106fc4a8a9b4ed7fa173d00d6c16e1a672eb26fe6" gracePeriod=30 Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.152816 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787668ff9-nlh6p" event={"ID":"e85096ea-b51a-4cda-a48b-fe63910073bb","Type":"ContainerStarted","Data":"f57182e0325a3f4cea993a5106fc4a8a9b4ed7fa173d00d6c16e1a672eb26fe6"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.158344 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a9f20b46-41f4-4a66-a21c-d187f50fe664","Type":"ContainerStarted","Data":"81666dd97513550e197f3d3cd884225e74466ba406d78bd83f5a6a4c31eff49b"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.173053 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rgck8" event={"ID":"4b6c7cc6-40e3-44ff-bd1c-6741af643002","Type":"ContainerStarted","Data":"661ef92c5c15b4dd2c10eeb20887ba2b4131d6fb0233af1502706010b6e1e512"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.173122 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rgck8" event={"ID":"4b6c7cc6-40e3-44ff-bd1c-6741af643002","Type":"ContainerStarted","Data":"779a79038b78d590c9bfa1e85577c1dc17351fd78c22af7beb544d00f46da56f"} Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.179199 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b9b4f5745-7mt44" podStartSLOduration=4.209645879 podStartE2EDuration="34.179165669s" podCreationTimestamp="2025-12-04 15:55:59 +0000 UTC" firstStartedPulling="2025-12-04 15:56:00.890297818 +0000 UTC m=+1204.852834774" lastFinishedPulling="2025-12-04 15:56:30.859817608 +0000 UTC m=+1234.822354564" observedRunningTime="2025-12-04 15:56:33.126367253 +0000 UTC m=+1237.088904219" watchObservedRunningTime="2025-12-04 15:56:33.179165669 +0000 UTC m=+1237.141702625" Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.185595 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7787668ff9-nlh6p" podStartSLOduration=4.53023838 podStartE2EDuration="34.179864957s" podCreationTimestamp="2025-12-04 15:55:59 +0000 UTC" firstStartedPulling="2025-12-04 15:56:01.801527055 +0000 UTC m=+1205.764064011" lastFinishedPulling="2025-12-04 15:56:31.451153642 +0000 UTC m=+1235.413690588" observedRunningTime="2025-12-04 15:56:33.173527878 +0000 UTC m=+1237.136064864" watchObservedRunningTime="2025-12-04 15:56:33.179864957 +0000 UTC 
m=+1237.142401913" Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.197119 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-rgck8" podStartSLOduration=18.19709211 podStartE2EDuration="18.19709211s" podCreationTimestamp="2025-12-04 15:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:33.192432133 +0000 UTC m=+1237.154969089" watchObservedRunningTime="2025-12-04 15:56:33.19709211 +0000 UTC m=+1237.159629066" Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.370996 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.607514 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-5h7ng" podUID="84ae3851-e9c9-4643-97c2-937ad6b572f9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: i/o timeout" Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.664967 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.723760 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-config\") pod \"67596249-6134-4ecd-8c9f-865a51c1cbfa\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.896161 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgzqw\" (UniqueName: \"kubernetes.io/projected/67596249-6134-4ecd-8c9f-865a51c1cbfa-kube-api-access-hgzqw\") pod \"67596249-6134-4ecd-8c9f-865a51c1cbfa\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.896225 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-combined-ca-bundle\") pod \"67596249-6134-4ecd-8c9f-865a51c1cbfa\" (UID: \"67596249-6134-4ecd-8c9f-865a51c1cbfa\") " Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.903142 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67596249-6134-4ecd-8c9f-865a51c1cbfa-kube-api-access-hgzqw" (OuterVolumeSpecName: "kube-api-access-hgzqw") pod "67596249-6134-4ecd-8c9f-865a51c1cbfa" (UID: "67596249-6134-4ecd-8c9f-865a51c1cbfa"). InnerVolumeSpecName "kube-api-access-hgzqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.911475 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-config" (OuterVolumeSpecName: "config") pod "67596249-6134-4ecd-8c9f-865a51c1cbfa" (UID: "67596249-6134-4ecd-8c9f-865a51c1cbfa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:33 crc kubenswrapper[4878]: I1204 15:56:33.936431 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67596249-6134-4ecd-8c9f-865a51c1cbfa" (UID: "67596249-6134-4ecd-8c9f-865a51c1cbfa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:33.999804 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:33.999854 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgzqw\" (UniqueName: \"kubernetes.io/projected/67596249-6134-4ecd-8c9f-865a51c1cbfa-kube-api-access-hgzqw\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:33.999885 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67596249-6134-4ecd-8c9f-865a51c1cbfa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.114466 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.196696 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db576cdd4-fp9zg" event={"ID":"50fc708e-8903-4765-aa76-c2125c0b8d22","Type":"ContainerStarted","Data":"943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2"} Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.203398 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4","Type":"ContainerStarted","Data":"3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a"} Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.214750 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efccf44a-5dad-4080-8b51-208c7dc43e35","Type":"ContainerStarted","Data":"de8c8c38f2bb3020841838cc4903af78a77ba8ba02d13aac66bbc3bc97d22c3f"} Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.230141 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-db576cdd4-fp9zg" podStartSLOduration=24.230118279 podStartE2EDuration="24.230118279s" podCreationTimestamp="2025-12-04 15:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:34.223860172 +0000 UTC m=+1238.186397128" watchObservedRunningTime="2025-12-04 15:56:34.230118279 +0000 UTC m=+1238.192655235" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.232815 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-4pqmg" event={"ID":"67596249-6134-4ecd-8c9f-865a51c1cbfa","Type":"ContainerDied","Data":"95de88d3d70995391b6f2a5069c93b8fdc22e99bf8f3ed10a645533d0ed921d2"} Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.232910 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95de88d3d70995391b6f2a5069c93b8fdc22e99bf8f3ed10a645533d0ed921d2" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.233118 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-4pqmg" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.260440 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c56cbf696-wj6zc" event={"ID":"63307580-b46f-421f-bbf5-52eafde58f6c","Type":"ContainerStarted","Data":"1832fa1305ab550739898f66fd01469ed7e6afa3ae77db5681b1d723490158de"} Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.307427 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c56cbf696-wj6zc" podStartSLOduration=24.307404921 podStartE2EDuration="24.307404921s" podCreationTimestamp="2025-12-04 15:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:34.289992173 +0000 UTC m=+1238.252529149" watchObservedRunningTime="2025-12-04 15:56:34.307404921 +0000 UTC m=+1238.269941877" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.392171 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wsxfk"] Dec 04 15:56:34 crc kubenswrapper[4878]: E1204 15:56:34.392728 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67596249-6134-4ecd-8c9f-865a51c1cbfa" containerName="neutron-db-sync" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.392743 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="67596249-6134-4ecd-8c9f-865a51c1cbfa" containerName="neutron-db-sync" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.392955 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="67596249-6134-4ecd-8c9f-865a51c1cbfa" containerName="neutron-db-sync" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.394111 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.461664 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85684bb58-xxv4g"] Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.464374 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.471522 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.471895 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.472120 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.478349 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-jst5w" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.515882 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.515944 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-config\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.515971 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9qz9\" (UniqueName: \"kubernetes.io/projected/b0d830a5-d873-4309-abfc-5354c3dfe4ef-kube-api-access-g9qz9\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.516000 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-config\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.516040 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-combined-ca-bundle\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.516090 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqcf\" (UniqueName: \"kubernetes.io/projected/548da95d-a291-478d-b9f6-c3b62b110de3-kube-api-access-txqcf\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.516142 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.516185 4878 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.516215 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-httpd-config\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.516241 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-svc\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.516264 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-ovndb-tls-certs\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.526787 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wsxfk"] Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.566427 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85684bb58-xxv4g"] Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618160 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618233 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-httpd-config\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618267 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-svc\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618291 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-ovndb-tls-certs\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618315 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618339 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-config\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618363 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9qz9\" (UniqueName: \"kubernetes.io/projected/b0d830a5-d873-4309-abfc-5354c3dfe4ef-kube-api-access-g9qz9\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618389 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-config\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618405 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-combined-ca-bundle\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618447 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqcf\" (UniqueName: \"kubernetes.io/projected/548da95d-a291-478d-b9f6-c3b62b110de3-kube-api-access-txqcf\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.618491 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.619477 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.619946 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.620511 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.621168 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-svc\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.621443 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-config\") pod 
\"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.643141 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-ovndb-tls-certs\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.644034 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-config\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.653678 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-combined-ca-bundle\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.654381 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-httpd-config\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.659489 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9qz9\" (UniqueName: \"kubernetes.io/projected/b0d830a5-d873-4309-abfc-5354c3dfe4ef-kube-api-access-g9qz9\") pod \"dnsmasq-dns-55f844cf75-wsxfk\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 
15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.664969 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqcf\" (UniqueName: \"kubernetes.io/projected/548da95d-a291-478d-b9f6-c3b62b110de3-kube-api-access-txqcf\") pod \"neutron-85684bb58-xxv4g\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") " pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.806620 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:34 crc kubenswrapper[4878]: I1204 15:56:34.857834 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:35 crc kubenswrapper[4878]: I1204 15:56:35.397463 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4","Type":"ContainerStarted","Data":"bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767"} Dec 04 15:56:35 crc kubenswrapper[4878]: I1204 15:56:35.397913 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efccf44a-5dad-4080-8b51-208c7dc43e35","Type":"ContainerStarted","Data":"e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04"} Dec 04 15:56:35 crc kubenswrapper[4878]: I1204 15:56:35.411938 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.411855613 podStartE2EDuration="5.411855613s" podCreationTimestamp="2025-12-04 15:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:35.374928416 +0000 UTC m=+1239.337465372" watchObservedRunningTime="2025-12-04 15:56:35.411855613 +0000 UTC m=+1239.374392569" Dec 04 15:56:35 crc kubenswrapper[4878]: I1204 
15:56:35.796771 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wsxfk"] Dec 04 15:56:36 crc kubenswrapper[4878]: I1204 15:56:36.053063 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85684bb58-xxv4g"] Dec 04 15:56:36 crc kubenswrapper[4878]: I1204 15:56:36.341430 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85684bb58-xxv4g" event={"ID":"548da95d-a291-478d-b9f6-c3b62b110de3","Type":"ContainerStarted","Data":"0eabb792d30dc4c25df13ebb4a9fc57bddd9b7f6c1c31179cbf5000380e21888"} Dec 04 15:56:36 crc kubenswrapper[4878]: I1204 15:56:36.344212 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" event={"ID":"b0d830a5-d873-4309-abfc-5354c3dfe4ef","Type":"ContainerStarted","Data":"718bf442d1f105860dd8692407712ada62ee6768da1e054380a7a27e15b24750"} Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.373591 4878 generic.go:334] "Generic (PLEG): container finished" podID="b0d830a5-d873-4309-abfc-5354c3dfe4ef" containerID="633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3" exitCode=0 Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.376050 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" event={"ID":"b0d830a5-d873-4309-abfc-5354c3dfe4ef","Type":"ContainerDied","Data":"633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3"} Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.396273 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85684bb58-xxv4g" event={"ID":"548da95d-a291-478d-b9f6-c3b62b110de3","Type":"ContainerStarted","Data":"7fbf0e0c50b3f86a7a7ebb6cc4ecaf7cdc4438812a86484491748d59f6b89d8b"} Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.414382 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7c7d556697-lmlhb"] Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.429033 
4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efccf44a-5dad-4080-8b51-208c7dc43e35","Type":"ContainerStarted","Data":"841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d"} Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.429435 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.435465 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.435733 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.450041 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c7d556697-lmlhb"] Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.537520 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-combined-ca-bundle\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.538194 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-internal-tls-certs\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.538251 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-config\") 
pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.538323 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfssf\" (UniqueName: \"kubernetes.io/projected/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-kube-api-access-cfssf\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.538373 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-ovndb-tls-certs\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.538487 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-httpd-config\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.539647 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-public-tls-certs\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.593053 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.593017744 podStartE2EDuration="8.593017744s" 
podCreationTimestamp="2025-12-04 15:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:37.482643681 +0000 UTC m=+1241.445180637" watchObservedRunningTime="2025-12-04 15:56:37.593017744 +0000 UTC m=+1241.555554700" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.649272 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-internal-tls-certs\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.649815 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-config\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.649852 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfssf\" (UniqueName: \"kubernetes.io/projected/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-kube-api-access-cfssf\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.649899 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-ovndb-tls-certs\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.649935 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-httpd-config\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.649971 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-public-tls-certs\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.650025 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-combined-ca-bundle\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.676429 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-config\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.680660 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-combined-ca-bundle\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.681597 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-httpd-config\") pod 
\"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.682379 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-ovndb-tls-certs\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.683558 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-public-tls-certs\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.692144 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-internal-tls-certs\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.692718 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfssf\" (UniqueName: \"kubernetes.io/projected/0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53-kube-api-access-cfssf\") pod \"neutron-7c7d556697-lmlhb\" (UID: \"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53\") " pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:37 crc kubenswrapper[4878]: I1204 15:56:37.947824 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:38 crc kubenswrapper[4878]: I1204 15:56:38.454424 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85684bb58-xxv4g" event={"ID":"548da95d-a291-478d-b9f6-c3b62b110de3","Type":"ContainerStarted","Data":"c723caafaef6bec6a4253ec63948917b828ec31d3cb066393337d7aefd97f02e"} Dec 04 15:56:38 crc kubenswrapper[4878]: I1204 15:56:38.458501 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:56:38 crc kubenswrapper[4878]: I1204 15:56:38.490892 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85684bb58-xxv4g" podStartSLOduration=4.490844476 podStartE2EDuration="4.490844476s" podCreationTimestamp="2025-12-04 15:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:38.481201774 +0000 UTC m=+1242.443738720" watchObservedRunningTime="2025-12-04 15:56:38.490844476 +0000 UTC m=+1242.453381432" Dec 04 15:56:38 crc kubenswrapper[4878]: I1204 15:56:38.675488 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7c7d556697-lmlhb"] Dec 04 15:56:39 crc kubenswrapper[4878]: I1204 15:56:39.484253 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 04 15:56:39 crc kubenswrapper[4878]: I1204 15:56:39.485189 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 04 15:56:39 crc kubenswrapper[4878]: I1204 15:56:39.491604 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7d556697-lmlhb" event={"ID":"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53","Type":"ContainerStarted","Data":"39440e23cf202664cabb3884d84891c47b349480a165f66b2ca206ca6cecf33c"} Dec 04 15:56:39 crc kubenswrapper[4878]: 
I1204 15:56:39.491660 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7d556697-lmlhb" event={"ID":"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53","Type":"ContainerStarted","Data":"c501edfcd0fc248d89fc012882a31718a348c7c527efc21107c1ec51ae602fea"} Dec 04 15:56:39 crc kubenswrapper[4878]: I1204 15:56:39.505448 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" event={"ID":"b0d830a5-d873-4309-abfc-5354c3dfe4ef","Type":"ContainerStarted","Data":"c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d"} Dec 04 15:56:39 crc kubenswrapper[4878]: I1204 15:56:39.505794 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:39 crc kubenswrapper[4878]: I1204 15:56:39.556304 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 04 15:56:39 crc kubenswrapper[4878]: I1204 15:56:39.602280 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 04 15:56:39 crc kubenswrapper[4878]: I1204 15:56:39.606357 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" podStartSLOduration=5.606327687 podStartE2EDuration="5.606327687s" podCreationTimestamp="2025-12-04 15:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:39.535373305 +0000 UTC m=+1243.497910281" watchObservedRunningTime="2025-12-04 15:56:39.606327687 +0000 UTC m=+1243.568864643" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.033349 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.243170 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.429101 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.429448 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.471471 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.483454 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.528607 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.529073 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.529092 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 04 15:56:40 crc kubenswrapper[4878]: I1204 15:56:40.529102 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:41 crc kubenswrapper[4878]: I1204 15:56:41.006817 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:41 crc kubenswrapper[4878]: I1204 15:56:41.007244 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:56:41 crc kubenswrapper[4878]: I1204 15:56:41.182180 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:41 crc kubenswrapper[4878]: I1204 15:56:41.182249 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6c56cbf696-wj6zc" Dec 04 15:56:42 crc kubenswrapper[4878]: I1204 15:56:42.561028 4878 generic.go:334] "Generic (PLEG): container finished" podID="4b6c7cc6-40e3-44ff-bd1c-6741af643002" containerID="661ef92c5c15b4dd2c10eeb20887ba2b4131d6fb0233af1502706010b6e1e512" exitCode=0 Dec 04 15:56:42 crc kubenswrapper[4878]: I1204 15:56:42.561123 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rgck8" event={"ID":"4b6c7cc6-40e3-44ff-bd1c-6741af643002","Type":"ContainerDied","Data":"661ef92c5c15b4dd2c10eeb20887ba2b4131d6fb0233af1502706010b6e1e512"} Dec 04 15:56:43 crc kubenswrapper[4878]: I1204 15:56:43.529465 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:43 crc kubenswrapper[4878]: I1204 15:56:43.529566 4878 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 15:56:43 crc kubenswrapper[4878]: I1204 15:56:43.531573 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 04 15:56:43 crc kubenswrapper[4878]: I1204 15:56:43.981807 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 04 15:56:43 crc kubenswrapper[4878]: I1204 15:56:43.985851 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 04 15:56:44 crc kubenswrapper[4878]: I1204 15:56:44.809090 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:56:44 crc kubenswrapper[4878]: I1204 15:56:44.903058 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-785d8bcb8c-2m666"] Dec 04 15:56:44 crc kubenswrapper[4878]: I1204 15:56:44.903328 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" podUID="b71ceafd-10e6-4b24-8021-a62932b44acb" containerName="dnsmasq-dns" containerID="cri-o://2f59f63b291dbc3fed045193b3b5b02b48a9fc4010c4b2251ddb2458db909754" gracePeriod=10 Dec 04 15:56:45 crc kubenswrapper[4878]: I1204 15:56:45.407347 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" podUID="b71ceafd-10e6-4b24-8021-a62932b44acb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: connect: connection refused" Dec 04 15:56:45 crc kubenswrapper[4878]: I1204 15:56:45.640277 4878 generic.go:334] "Generic (PLEG): container finished" podID="b71ceafd-10e6-4b24-8021-a62932b44acb" containerID="2f59f63b291dbc3fed045193b3b5b02b48a9fc4010c4b2251ddb2458db909754" exitCode=0 Dec 04 15:56:45 crc kubenswrapper[4878]: I1204 15:56:45.640334 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" event={"ID":"b71ceafd-10e6-4b24-8021-a62932b44acb","Type":"ContainerDied","Data":"2f59f63b291dbc3fed045193b3b5b02b48a9fc4010c4b2251ddb2458db909754"} Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.407326 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.583128 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-combined-ca-bundle\") pod \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.583270 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-config-data\") pod \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.583414 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-scripts\") pod \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.583606 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-credential-keys\") pod \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.583725 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-fernet-keys\") pod \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.583753 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdrcq\" (UniqueName: 
\"kubernetes.io/projected/4b6c7cc6-40e3-44ff-bd1c-6741af643002-kube-api-access-sdrcq\") pod \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\" (UID: \"4b6c7cc6-40e3-44ff-bd1c-6741af643002\") " Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.593406 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4b6c7cc6-40e3-44ff-bd1c-6741af643002" (UID: "4b6c7cc6-40e3-44ff-bd1c-6741af643002"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.595216 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b6c7cc6-40e3-44ff-bd1c-6741af643002-kube-api-access-sdrcq" (OuterVolumeSpecName: "kube-api-access-sdrcq") pod "4b6c7cc6-40e3-44ff-bd1c-6741af643002" (UID: "4b6c7cc6-40e3-44ff-bd1c-6741af643002"). InnerVolumeSpecName "kube-api-access-sdrcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.605816 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-scripts" (OuterVolumeSpecName: "scripts") pod "4b6c7cc6-40e3-44ff-bd1c-6741af643002" (UID: "4b6c7cc6-40e3-44ff-bd1c-6741af643002"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.632240 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4b6c7cc6-40e3-44ff-bd1c-6741af643002" (UID: "4b6c7cc6-40e3-44ff-bd1c-6741af643002"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.643648 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-config-data" (OuterVolumeSpecName: "config-data") pod "4b6c7cc6-40e3-44ff-bd1c-6741af643002" (UID: "4b6c7cc6-40e3-44ff-bd1c-6741af643002"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.653010 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b6c7cc6-40e3-44ff-bd1c-6741af643002" (UID: "4b6c7cc6-40e3-44ff-bd1c-6741af643002"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.684682 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-rgck8" event={"ID":"4b6c7cc6-40e3-44ff-bd1c-6741af643002","Type":"ContainerDied","Data":"779a79038b78d590c9bfa1e85577c1dc17351fd78c22af7beb544d00f46da56f"} Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.685070 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="779a79038b78d590c9bfa1e85577c1dc17351fd78c22af7beb544d00f46da56f" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.685253 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-rgck8" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.689766 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.690355 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.690405 4878 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.690422 4878 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.690434 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdrcq\" (UniqueName: \"kubernetes.io/projected/4b6c7cc6-40e3-44ff-bd1c-6741af643002-kube-api-access-sdrcq\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.690444 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b6c7cc6-40e3-44ff-bd1c-6741af643002-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:47 crc kubenswrapper[4878]: I1204 15:56:47.969443 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.102443 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-config\") pod \"b71ceafd-10e6-4b24-8021-a62932b44acb\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.103049 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-swift-storage-0\") pod \"b71ceafd-10e6-4b24-8021-a62932b44acb\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.103098 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-nb\") pod \"b71ceafd-10e6-4b24-8021-a62932b44acb\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.103256 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-svc\") pod \"b71ceafd-10e6-4b24-8021-a62932b44acb\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.103472 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q5rz\" (UniqueName: \"kubernetes.io/projected/b71ceafd-10e6-4b24-8021-a62932b44acb-kube-api-access-7q5rz\") pod \"b71ceafd-10e6-4b24-8021-a62932b44acb\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.103501 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-sb\") pod \"b71ceafd-10e6-4b24-8021-a62932b44acb\" (UID: \"b71ceafd-10e6-4b24-8021-a62932b44acb\") " Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.118322 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b71ceafd-10e6-4b24-8021-a62932b44acb-kube-api-access-7q5rz" (OuterVolumeSpecName: "kube-api-access-7q5rz") pod "b71ceafd-10e6-4b24-8021-a62932b44acb" (UID: "b71ceafd-10e6-4b24-8021-a62932b44acb"). InnerVolumeSpecName "kube-api-access-7q5rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.185819 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-config" (OuterVolumeSpecName: "config") pod "b71ceafd-10e6-4b24-8021-a62932b44acb" (UID: "b71ceafd-10e6-4b24-8021-a62932b44acb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.197553 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b71ceafd-10e6-4b24-8021-a62932b44acb" (UID: "b71ceafd-10e6-4b24-8021-a62932b44acb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.217631 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q5rz\" (UniqueName: \"kubernetes.io/projected/b71ceafd-10e6-4b24-8021-a62932b44acb-kube-api-access-7q5rz\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.217709 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.217723 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.240093 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b71ceafd-10e6-4b24-8021-a62932b44acb" (UID: "b71ceafd-10e6-4b24-8021-a62932b44acb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.248791 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b71ceafd-10e6-4b24-8021-a62932b44acb" (UID: "b71ceafd-10e6-4b24-8021-a62932b44acb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.263815 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b71ceafd-10e6-4b24-8021-a62932b44acb" (UID: "b71ceafd-10e6-4b24-8021-a62932b44acb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.320090 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.320142 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.320158 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b71ceafd-10e6-4b24-8021-a62932b44acb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.587919 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-96bf8d55-s7dcq"] Dec 04 15:56:48 crc kubenswrapper[4878]: E1204 15:56:48.588477 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71ceafd-10e6-4b24-8021-a62932b44acb" containerName="init" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.588496 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71ceafd-10e6-4b24-8021-a62932b44acb" containerName="init" Dec 04 15:56:48 crc kubenswrapper[4878]: E1204 15:56:48.588515 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4b6c7cc6-40e3-44ff-bd1c-6741af643002" containerName="keystone-bootstrap" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.588522 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b6c7cc6-40e3-44ff-bd1c-6741af643002" containerName="keystone-bootstrap" Dec 04 15:56:48 crc kubenswrapper[4878]: E1204 15:56:48.588565 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b71ceafd-10e6-4b24-8021-a62932b44acb" containerName="dnsmasq-dns" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.588573 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b71ceafd-10e6-4b24-8021-a62932b44acb" containerName="dnsmasq-dns" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.588760 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="b71ceafd-10e6-4b24-8021-a62932b44acb" containerName="dnsmasq-dns" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.588781 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b6c7cc6-40e3-44ff-bd1c-6741af643002" containerName="keystone-bootstrap" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.599042 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.602799 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.603354 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.603546 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-l74bm" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.606452 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-scripts\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.606677 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-credential-keys\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.606749 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-internal-tls-certs\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.606784 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r28h8\" (UniqueName: 
\"kubernetes.io/projected/59f69e03-b3e6-49bf-9b26-e10703659609-kube-api-access-r28h8\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.606848 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-public-tls-certs\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.607016 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-combined-ca-bundle\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.607079 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-fernet-keys\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.607223 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-config-data\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.610794 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 
15:56:48.610986 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.611186 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.626964 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-96bf8d55-s7dcq"] Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.710719 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-public-tls-certs\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.710819 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-combined-ca-bundle\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.710847 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-fernet-keys\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.710863 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-config-data\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 
15:56:48.710933 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-scripts\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.710995 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-credential-keys\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.711018 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-internal-tls-certs\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.711053 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r28h8\" (UniqueName: \"kubernetes.io/projected/59f69e03-b3e6-49bf-9b26-e10703659609-kube-api-access-r28h8\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.726593 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-credential-keys\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.727265 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-config-data\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.727325 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-public-tls-certs\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.728843 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-fernet-keys\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.743037 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-combined-ca-bundle\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.743314 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-scripts\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.750648 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/59f69e03-b3e6-49bf-9b26-e10703659609-internal-tls-certs\") pod \"keystone-96bf8d55-s7dcq\" (UID: 
\"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.750965 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7c7d556697-lmlhb" event={"ID":"0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53","Type":"ContainerStarted","Data":"0dac8cd5b292dfd745603f0cb0653b36cbebe17bcfe76b6131736c5992d979bb"} Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.751330 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.762839 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r28h8\" (UniqueName: \"kubernetes.io/projected/59f69e03-b3e6-49bf-9b26-e10703659609-kube-api-access-r28h8\") pod \"keystone-96bf8d55-s7dcq\" (UID: \"59f69e03-b3e6-49bf-9b26-e10703659609\") " pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.769719 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vm2hn" event={"ID":"d7a20413-55ed-48d6-98c3-0bd98368deaa","Type":"ContainerStarted","Data":"1e35fa8a6f48b3ccfe8d9eb2116c66b67e963bcce7b0620b9fbb3188ca2b408b"} Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.775739 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" event={"ID":"b71ceafd-10e6-4b24-8021-a62932b44acb","Type":"ContainerDied","Data":"df7902fa2041a42c6ec4eb5b55eb3bf81ab8fe67bb99c5a45b9071301ec5f7e0"} Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.775798 4878 scope.go:117] "RemoveContainer" containerID="2f59f63b291dbc3fed045193b3b5b02b48a9fc4010c4b2251ddb2458db909754" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.776089 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-2m666" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.809472 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7c7d556697-lmlhb" podStartSLOduration=11.809446777 podStartE2EDuration="11.809446777s" podCreationTimestamp="2025-12-04 15:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:48.784514251 +0000 UTC m=+1252.747051217" watchObservedRunningTime="2025-12-04 15:56:48.809446777 +0000 UTC m=+1252.771983733" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.820218 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-vm2hn" podStartSLOduration=2.907207671 podStartE2EDuration="49.820192837s" podCreationTimestamp="2025-12-04 15:55:59 +0000 UTC" firstStartedPulling="2025-12-04 15:56:01.145085287 +0000 UTC m=+1205.107622243" lastFinishedPulling="2025-12-04 15:56:48.058070453 +0000 UTC m=+1252.020607409" observedRunningTime="2025-12-04 15:56:48.818566787 +0000 UTC m=+1252.781103743" watchObservedRunningTime="2025-12-04 15:56:48.820192837 +0000 UTC m=+1252.782729793" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.956393 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2m666"] Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.960382 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.962391 4878 scope.go:117] "RemoveContainer" containerID="2557c90b407cf8b6842d50e669c2a0d817cb46b0f22b127ff8704e777f2b29e1" Dec 04 15:56:48 crc kubenswrapper[4878]: I1204 15:56:48.988857 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-2m666"] Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.233086 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b71ceafd-10e6-4b24-8021-a62932b44acb" path="/var/lib/kubelet/pods/b71ceafd-10e6-4b24-8021-a62932b44acb/volumes" Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.411615 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-96bf8d55-s7dcq"] Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.806144 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-96bf8d55-s7dcq" event={"ID":"59f69e03-b3e6-49bf-9b26-e10703659609","Type":"ContainerStarted","Data":"94b2b6e72c949bfd6c5e3d06d0da650f6fa71a82a35b12a8981573bf15995ef6"} Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.806597 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-96bf8d55-s7dcq" event={"ID":"59f69e03-b3e6-49bf-9b26-e10703659609","Type":"ContainerStarted","Data":"a662f148ecaaacc20c0b91d547acd3c4b14402643d66169277f0be1b876a9ce7"} Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.807035 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.826556 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9f20b46-41f4-4a66-a21c-d187f50fe664","Type":"ContainerStarted","Data":"e12662a6c3296e2b53d8ef791a69694b77a984f7444af44a0cbf2bf0dd62837a"} Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.831057 4878 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-874tf" event={"ID":"9e69f1bb-0019-4fee-b04b-d4e6319c61db","Type":"ContainerStarted","Data":"b8fe6232990f712f6fc840b67824c706e9c9c437243418f8234d000631b44808"} Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.831254 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-96bf8d55-s7dcq" podStartSLOduration=1.831239754 podStartE2EDuration="1.831239754s" podCreationTimestamp="2025-12-04 15:56:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:56:49.82592322 +0000 UTC m=+1253.788460176" watchObservedRunningTime="2025-12-04 15:56:49.831239754 +0000 UTC m=+1253.793776710" Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.853605 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sljcs" event={"ID":"b7b4a412-5105-473d-8037-1b43c331046b","Type":"ContainerStarted","Data":"d876f9782754cc3642fdb93d37a1d61635e48f65a76f6de526988b8bdbfabea7"} Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.870668 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-874tf" podStartSLOduration=4.965989201 podStartE2EDuration="50.870641594s" podCreationTimestamp="2025-12-04 15:55:59 +0000 UTC" firstStartedPulling="2025-12-04 15:56:02.145727837 +0000 UTC m=+1206.108264793" lastFinishedPulling="2025-12-04 15:56:48.05038023 +0000 UTC m=+1252.012917186" observedRunningTime="2025-12-04 15:56:49.856095648 +0000 UTC m=+1253.818632604" watchObservedRunningTime="2025-12-04 15:56:49.870641594 +0000 UTC m=+1253.833178550" Dec 04 15:56:49 crc kubenswrapper[4878]: I1204 15:56:49.898984 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-sljcs" podStartSLOduration=3.801162582 podStartE2EDuration="50.898954595s" podCreationTimestamp="2025-12-04 15:55:59 
+0000 UTC" firstStartedPulling="2025-12-04 15:56:00.940596646 +0000 UTC m=+1204.903133602" lastFinishedPulling="2025-12-04 15:56:48.038388659 +0000 UTC m=+1252.000925615" observedRunningTime="2025-12-04 15:56:49.878731137 +0000 UTC m=+1253.841268103" watchObservedRunningTime="2025-12-04 15:56:49.898954595 +0000 UTC m=+1253.861491551" Dec 04 15:56:51 crc kubenswrapper[4878]: I1204 15:56:51.011176 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-db576cdd4-fp9zg" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 04 15:56:51 crc kubenswrapper[4878]: I1204 15:56:51.181859 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6c56cbf696-wj6zc" podUID="63307580-b46f-421f-bbf5-52eafde58f6c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 04 15:56:52 crc kubenswrapper[4878]: I1204 15:56:52.892559 4878 generic.go:334] "Generic (PLEG): container finished" podID="9e69f1bb-0019-4fee-b04b-d4e6319c61db" containerID="b8fe6232990f712f6fc840b67824c706e9c9c437243418f8234d000631b44808" exitCode=0 Dec 04 15:56:52 crc kubenswrapper[4878]: I1204 15:56:52.892675 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-874tf" event={"ID":"9e69f1bb-0019-4fee-b04b-d4e6319c61db","Type":"ContainerDied","Data":"b8fe6232990f712f6fc840b67824c706e9c9c437243418f8234d000631b44808"} Dec 04 15:56:52 crc kubenswrapper[4878]: I1204 15:56:52.900930 4878 generic.go:334] "Generic (PLEG): container finished" podID="d7a20413-55ed-48d6-98c3-0bd98368deaa" containerID="1e35fa8a6f48b3ccfe8d9eb2116c66b67e963bcce7b0620b9fbb3188ca2b408b" exitCode=0 Dec 04 15:56:52 crc kubenswrapper[4878]: I1204 15:56:52.900998 
4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vm2hn" event={"ID":"d7a20413-55ed-48d6-98c3-0bd98368deaa","Type":"ContainerDied","Data":"1e35fa8a6f48b3ccfe8d9eb2116c66b67e963bcce7b0620b9fbb3188ca2b408b"} Dec 04 15:56:53 crc kubenswrapper[4878]: I1204 15:56:53.915886 4878 generic.go:334] "Generic (PLEG): container finished" podID="b7b4a412-5105-473d-8037-1b43c331046b" containerID="d876f9782754cc3642fdb93d37a1d61635e48f65a76f6de526988b8bdbfabea7" exitCode=0 Dec 04 15:56:53 crc kubenswrapper[4878]: I1204 15:56:53.916147 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sljcs" event={"ID":"b7b4a412-5105-473d-8037-1b43c331046b","Type":"ContainerDied","Data":"d876f9782754cc3642fdb93d37a1d61635e48f65a76f6de526988b8bdbfabea7"} Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.670158 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.835358 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8v4k\" (UniqueName: \"kubernetes.io/projected/d7a20413-55ed-48d6-98c3-0bd98368deaa-kube-api-access-v8v4k\") pod \"d7a20413-55ed-48d6-98c3-0bd98368deaa\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.835461 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-db-sync-config-data\") pod \"d7a20413-55ed-48d6-98c3-0bd98368deaa\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.835562 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-combined-ca-bundle\") pod 
\"d7a20413-55ed-48d6-98c3-0bd98368deaa\" (UID: \"d7a20413-55ed-48d6-98c3-0bd98368deaa\") " Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.859125 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d7a20413-55ed-48d6-98c3-0bd98368deaa" (UID: "d7a20413-55ed-48d6-98c3-0bd98368deaa"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.861189 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7a20413-55ed-48d6-98c3-0bd98368deaa-kube-api-access-v8v4k" (OuterVolumeSpecName: "kube-api-access-v8v4k") pod "d7a20413-55ed-48d6-98c3-0bd98368deaa" (UID: "d7a20413-55ed-48d6-98c3-0bd98368deaa"). InnerVolumeSpecName "kube-api-access-v8v4k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.879007 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7a20413-55ed-48d6-98c3-0bd98368deaa" (UID: "d7a20413-55ed-48d6-98c3-0bd98368deaa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.938590 4878 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.938644 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7a20413-55ed-48d6-98c3-0bd98368deaa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.938658 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8v4k\" (UniqueName: \"kubernetes.io/projected/d7a20413-55ed-48d6-98c3-0bd98368deaa-kube-api-access-v8v4k\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.994568 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-vm2hn" event={"ID":"d7a20413-55ed-48d6-98c3-0bd98368deaa","Type":"ContainerDied","Data":"7891db646ed00c497fca6b9a3e7b8d80be2b544210ba562bc27e448792a9c8ca"} Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.994603 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-vm2hn" Dec 04 15:56:57 crc kubenswrapper[4878]: I1204 15:56:57.994620 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7891db646ed00c497fca6b9a3e7b8d80be2b544210ba562bc27e448792a9c8ca" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.030958 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5958c7964f-4fxmd"] Dec 04 15:56:59 crc kubenswrapper[4878]: E1204 15:56:59.036992 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7a20413-55ed-48d6-98c3-0bd98368deaa" containerName="barbican-db-sync" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.037040 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7a20413-55ed-48d6-98c3-0bd98368deaa" containerName="barbican-db-sync" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.037330 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7a20413-55ed-48d6-98c3-0bd98368deaa" containerName="barbican-db-sync" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.039051 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.044232 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.044513 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-5zfv7" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.048038 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.062272 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-c7d97c576-6crcc"] Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.064472 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.073297 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.157259 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5958c7964f-4fxmd"] Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.178564 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c7d97c576-6crcc"] Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.201520 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a85aaed-250a-44a2-aa46-3ca586b53e2b-combined-ca-bundle\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.201668 4878 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpzw9\" (UniqueName: \"kubernetes.io/projected/9a85aaed-250a-44a2-aa46-3ca586b53e2b-kube-api-access-zpzw9\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.201788 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a85aaed-250a-44a2-aa46-3ca586b53e2b-config-data-custom\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.201826 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-logs\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.202025 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a85aaed-250a-44a2-aa46-3ca586b53e2b-logs\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.202176 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn8v2\" (UniqueName: \"kubernetes.io/projected/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-kube-api-access-pn8v2\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: 
\"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.202282 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-config-data\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.202378 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-combined-ca-bundle\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.202434 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a85aaed-250a-44a2-aa46-3ca586b53e2b-config-data\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.202465 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-config-data-custom\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.274153 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cm9wt"] Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.279650 4878 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.309725 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpzw9\" (UniqueName: \"kubernetes.io/projected/9a85aaed-250a-44a2-aa46-3ca586b53e2b-kube-api-access-zpzw9\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.309790 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.309821 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-config\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312018 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312098 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9a85aaed-250a-44a2-aa46-3ca586b53e2b-config-data-custom\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312175 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-logs\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312246 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312325 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a85aaed-250a-44a2-aa46-3ca586b53e2b-logs\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312379 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn8v2\" (UniqueName: \"kubernetes.io/projected/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-kube-api-access-pn8v2\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312414 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-config-data\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312448 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312488 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcntr\" (UniqueName: \"kubernetes.io/projected/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-kube-api-access-zcntr\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312535 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a85aaed-250a-44a2-aa46-3ca586b53e2b-config-data\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312564 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-combined-ca-bundle\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312590 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-config-data-custom\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.312651 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a85aaed-250a-44a2-aa46-3ca586b53e2b-combined-ca-bundle\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.313173 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a85aaed-250a-44a2-aa46-3ca586b53e2b-logs\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.335519 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-logs\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.346924 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-config-data\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.353393 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-config-data-custom\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.361044 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9a85aaed-250a-44a2-aa46-3ca586b53e2b-config-data\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.362328 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9a85aaed-250a-44a2-aa46-3ca586b53e2b-config-data-custom\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.362525 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpzw9\" (UniqueName: \"kubernetes.io/projected/9a85aaed-250a-44a2-aa46-3ca586b53e2b-kube-api-access-zpzw9\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.369585 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-combined-ca-bundle\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.372111 4878 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cm9wt"] Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.377307 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn8v2\" (UniqueName: \"kubernetes.io/projected/a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0-kube-api-access-pn8v2\") pod \"barbican-worker-5958c7964f-4fxmd\" (UID: \"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0\") " pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.387433 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5958c7964f-4fxmd" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.390379 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a85aaed-250a-44a2-aa46-3ca586b53e2b-combined-ca-bundle\") pod \"barbican-keystone-listener-c7d97c576-6crcc\" (UID: \"9a85aaed-250a-44a2-aa46-3ca586b53e2b\") " pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.395125 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.417364 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.417419 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-config\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.417459 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.417580 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.417664 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " 
pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.417699 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcntr\" (UniqueName: \"kubernetes.io/projected/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-kube-api-access-zcntr\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.419586 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-config\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.420497 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.421142 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.421613 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 
15:56:59.423452 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.465322 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcntr\" (UniqueName: \"kubernetes.io/projected/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-kube-api-access-zcntr\") pod \"dnsmasq-dns-85ff748b95-cm9wt\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.472835 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-874tf" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.480377 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75f5766fbb-8lpst"] Dec 04 15:56:59 crc kubenswrapper[4878]: E1204 15:56:59.481073 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e69f1bb-0019-4fee-b04b-d4e6319c61db" containerName="placement-db-sync" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.481101 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e69f1bb-0019-4fee-b04b-d4e6319c61db" containerName="placement-db-sync" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.481479 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e69f1bb-0019-4fee-b04b-d4e6319c61db" containerName="placement-db-sync" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.482688 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.486469 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.498983 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sljcs" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.515682 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75f5766fbb-8lpst"] Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.527394 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.623181 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-config-data\") pod \"b7b4a412-5105-473d-8037-1b43c331046b\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.625601 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-config-data\") pod \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.625844 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-db-sync-config-data\") pod \"b7b4a412-5105-473d-8037-1b43c331046b\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.626167 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9e69f1bb-0019-4fee-b04b-d4e6319c61db-logs\") pod \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.626405 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-combined-ca-bundle\") pod \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.626579 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpkkp\" (UniqueName: \"kubernetes.io/projected/9e69f1bb-0019-4fee-b04b-d4e6319c61db-kube-api-access-jpkkp\") pod \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.626829 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7b4a412-5105-473d-8037-1b43c331046b-etc-machine-id\") pod \"b7b4a412-5105-473d-8037-1b43c331046b\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.626981 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rmv4\" (UniqueName: \"kubernetes.io/projected/b7b4a412-5105-473d-8037-1b43c331046b-kube-api-access-8rmv4\") pod \"b7b4a412-5105-473d-8037-1b43c331046b\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.627404 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-scripts\") pod \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\" (UID: \"9e69f1bb-0019-4fee-b04b-d4e6319c61db\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 
15:56:59.627496 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-scripts\") pod \"b7b4a412-5105-473d-8037-1b43c331046b\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.627566 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-combined-ca-bundle\") pod \"b7b4a412-5105-473d-8037-1b43c331046b\" (UID: \"b7b4a412-5105-473d-8037-1b43c331046b\") " Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.628238 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-logs\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.628578 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdfh\" (UniqueName: \"kubernetes.io/projected/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-kube-api-access-zrdfh\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.628663 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-combined-ca-bundle\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.628738 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data-custom\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.628932 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.631177 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e69f1bb-0019-4fee-b04b-d4e6319c61db-logs" (OuterVolumeSpecName: "logs") pod "9e69f1bb-0019-4fee-b04b-d4e6319c61db" (UID: "9e69f1bb-0019-4fee-b04b-d4e6319c61db"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.633076 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7b4a412-5105-473d-8037-1b43c331046b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b7b4a412-5105-473d-8037-1b43c331046b" (UID: "b7b4a412-5105-473d-8037-1b43c331046b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.635779 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-scripts" (OuterVolumeSpecName: "scripts") pod "9e69f1bb-0019-4fee-b04b-d4e6319c61db" (UID: "9e69f1bb-0019-4fee-b04b-d4e6319c61db"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.651403 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b7b4a412-5105-473d-8037-1b43c331046b" (UID: "b7b4a412-5105-473d-8037-1b43c331046b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.653102 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-scripts" (OuterVolumeSpecName: "scripts") pod "b7b4a412-5105-473d-8037-1b43c331046b" (UID: "b7b4a412-5105-473d-8037-1b43c331046b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.655898 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b4a412-5105-473d-8037-1b43c331046b-kube-api-access-8rmv4" (OuterVolumeSpecName: "kube-api-access-8rmv4") pod "b7b4a412-5105-473d-8037-1b43c331046b" (UID: "b7b4a412-5105-473d-8037-1b43c331046b"). InnerVolumeSpecName "kube-api-access-8rmv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.657659 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e69f1bb-0019-4fee-b04b-d4e6319c61db-kube-api-access-jpkkp" (OuterVolumeSpecName: "kube-api-access-jpkkp") pod "9e69f1bb-0019-4fee-b04b-d4e6319c61db" (UID: "9e69f1bb-0019-4fee-b04b-d4e6319c61db"). InnerVolumeSpecName "kube-api-access-jpkkp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.714604 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-config-data" (OuterVolumeSpecName: "config-data") pod "9e69f1bb-0019-4fee-b04b-d4e6319c61db" (UID: "9e69f1bb-0019-4fee-b04b-d4e6319c61db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.731212 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdfh\" (UniqueName: \"kubernetes.io/projected/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-kube-api-access-zrdfh\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.737099 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-combined-ca-bundle\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.738394 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data-custom\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.739006 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: 
\"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.739317 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-logs\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.739693 4878 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b7b4a412-5105-473d-8037-1b43c331046b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.739811 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rmv4\" (UniqueName: \"kubernetes.io/projected/b7b4a412-5105-473d-8037-1b43c331046b-kube-api-access-8rmv4\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.740064 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.740192 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.740318 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.740434 4878 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.740550 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e69f1bb-0019-4fee-b04b-d4e6319c61db-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.740658 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpkkp\" (UniqueName: \"kubernetes.io/projected/9e69f1bb-0019-4fee-b04b-d4e6319c61db-kube-api-access-jpkkp\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.741178 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-logs\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.770775 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.771440 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data-custom\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.771924 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-combined-ca-bundle\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.787606 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdfh\" (UniqueName: \"kubernetes.io/projected/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-kube-api-access-zrdfh\") pod \"barbican-api-75f5766fbb-8lpst\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") " pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.787770 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7b4a412-5105-473d-8037-1b43c331046b" (UID: "b7b4a412-5105-473d-8037-1b43c331046b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.790006 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e69f1bb-0019-4fee-b04b-d4e6319c61db" (UID: "9e69f1bb-0019-4fee-b04b-d4e6319c61db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.842546 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e69f1bb-0019-4fee-b04b-d4e6319c61db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.842607 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.844108 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-config-data" (OuterVolumeSpecName: "config-data") pod "b7b4a412-5105-473d-8037-1b43c331046b" (UID: "b7b4a412-5105-473d-8037-1b43c331046b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.864621 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:56:59 crc kubenswrapper[4878]: I1204 15:56:59.946937 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b4a412-5105-473d-8037-1b43c331046b-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.030597 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5958c7964f-4fxmd"] Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.081504 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-sljcs" event={"ID":"b7b4a412-5105-473d-8037-1b43c331046b","Type":"ContainerDied","Data":"6433df386b188f72439f0f42b6dff7a8d31309a6c646800a05b6ae14d2c38918"} Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.081575 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6433df386b188f72439f0f42b6dff7a8d31309a6c646800a05b6ae14d2c38918" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.081777 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-sljcs" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.098510 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-874tf" event={"ID":"9e69f1bb-0019-4fee-b04b-d4e6319c61db","Type":"ContainerDied","Data":"e0e360bb81eca7a5259c7768f204db3be16be2e27bd5651a8daec36a5cdfd976"} Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.098570 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e360bb81eca7a5259c7768f204db3be16be2e27bd5651a8daec36a5cdfd976" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.098667 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-874tf" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.764320 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-c7d97c576-6crcc"] Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.783490 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c5668dccb-gv79r"] Dec 04 15:57:00 crc kubenswrapper[4878]: E1204 15:57:00.786978 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b4a412-5105-473d-8037-1b43c331046b" containerName="cinder-db-sync" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.787008 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b4a412-5105-473d-8037-1b43c331046b" containerName="cinder-db-sync" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.787260 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b4a412-5105-473d-8037-1b43c331046b" containerName="cinder-db-sync" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.788490 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.798539 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-config-data\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.798611 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-combined-ca-bundle\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.798642 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-public-tls-certs\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.798723 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460bb923-1a77-4759-98cb-b6262047cc27-logs\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.798755 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.798818 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-internal-tls-certs\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.798854 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx8kg\" (UniqueName: \"kubernetes.io/projected/460bb923-1a77-4759-98cb-b6262047cc27-kube-api-access-vx8kg\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.798903 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-scripts\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.804784 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.804984 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.805040 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.808347 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-ftlgh" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.842294 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c5668dccb-gv79r"] Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.854937 4878 patch_prober.go:28] interesting 
pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.855005 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.855064 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.856151 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ce89844f12ad0014470ec73950bdad107de7be05fbc862a4ec63ed384618b0a"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.856226 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://2ce89844f12ad0014470ec73950bdad107de7be05fbc862a4ec63ed384618b0a" gracePeriod=600 Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.901476 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-config-data\") pod \"placement-c5668dccb-gv79r\" (UID: 
\"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.901572 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-combined-ca-bundle\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.901598 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-public-tls-certs\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.901711 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460bb923-1a77-4759-98cb-b6262047cc27-logs\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.901817 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-internal-tls-certs\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.901862 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx8kg\" (UniqueName: \"kubernetes.io/projected/460bb923-1a77-4759-98cb-b6262047cc27-kube-api-access-vx8kg\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " 
pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.901917 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-scripts\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.903700 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/460bb923-1a77-4759-98cb-b6262047cc27-logs\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.926047 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-public-tls-certs\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.934073 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-combined-ca-bundle\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.950774 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-internal-tls-certs\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.951739 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-config-data\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.958169 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/460bb923-1a77-4759-98cb-b6262047cc27-scripts\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:00 crc kubenswrapper[4878]: I1204 15:57:00.966777 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx8kg\" (UniqueName: \"kubernetes.io/projected/460bb923-1a77-4759-98cb-b6262047cc27-kube-api-access-vx8kg\") pod \"placement-c5668dccb-gv79r\" (UID: \"460bb923-1a77-4759-98cb-b6262047cc27\") " pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.012467 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-db576cdd4-fp9zg" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.110573 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.112610 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.130828 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-kqkbb" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.133173 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.133608 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.135990 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5958c7964f-4fxmd" event={"ID":"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0","Type":"ContainerStarted","Data":"bc9162c50f58f91bc45b22174b1bb44ff21fcd2c2e21cb34aff77c4351fd0256"} Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.142170 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.148409 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" event={"ID":"9a85aaed-250a-44a2-aa46-3ca586b53e2b","Type":"ContainerStarted","Data":"7a8c3a1f6eb181cef67ebe1e88995c67969f35188c139eb1132560f0a7ea3a9e"} Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.189036 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.189283 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cm9wt"] Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.193992 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6c56cbf696-wj6zc" podUID="63307580-b46f-421f-bbf5-52eafde58f6c" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.220450 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.243281 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlz74\" (UniqueName: \"kubernetes.io/projected/aa160d23-1687-423c-8fdc-a082bfb7482b-kube-api-access-jlz74\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.243356 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.243418 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa160d23-1687-423c-8fdc-a082bfb7482b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.243521 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-scripts\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.243656 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.243801 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: E1204 15:57:01.256535 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.320404 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cm9wt"] Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.332065 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5kpzl"] Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.334232 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.351390 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-scripts\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.351465 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.351514 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.351591 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlz74\" (UniqueName: \"kubernetes.io/projected/aa160d23-1687-423c-8fdc-a082bfb7482b-kube-api-access-jlz74\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.351843 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.352140 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa160d23-1687-423c-8fdc-a082bfb7482b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.352500 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa160d23-1687-423c-8fdc-a082bfb7482b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.362101 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.373440 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.376636 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.378484 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-scripts\") pod 
\"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.432233 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlz74\" (UniqueName: \"kubernetes.io/projected/aa160d23-1687-423c-8fdc-a082bfb7482b-kube-api-access-jlz74\") pod \"cinder-scheduler-0\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.455195 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.455279 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6clh4\" (UniqueName: \"kubernetes.io/projected/032679f9-cf8f-4acf-8aea-37675bdf187d-kube-api-access-6clh4\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.455336 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.455364 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.455392 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.455452 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-config\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.480271 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5kpzl"] Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.480999 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.561471 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6clh4\" (UniqueName: \"kubernetes.io/projected/032679f9-cf8f-4acf-8aea-37675bdf187d-kube-api-access-6clh4\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.561619 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.561663 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.561709 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.561802 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-config\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" 
Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.562121 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.564002 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.564652 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.565333 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.565398 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.565949 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-config\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.602520 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6clh4\" (UniqueName: \"kubernetes.io/projected/032679f9-cf8f-4acf-8aea-37675bdf187d-kube-api-access-6clh4\") pod \"dnsmasq-dns-5c9776ccc5-5kpzl\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.611006 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75f5766fbb-8lpst"] Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.634983 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.637582 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.641040 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.642935 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.679559 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.679616 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc930af-1628-46e9-8aa1-69eb569e5fe4-logs\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.679709 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data-custom\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.679732 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzrx7\" (UniqueName: \"kubernetes.io/projected/edc930af-1628-46e9-8aa1-69eb569e5fe4-kube-api-access-hzrx7\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.679788 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc930af-1628-46e9-8aa1-69eb569e5fe4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.679922 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.680062 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-scripts\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.783025 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data-custom\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.783462 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzrx7\" (UniqueName: \"kubernetes.io/projected/edc930af-1628-46e9-8aa1-69eb569e5fe4-kube-api-access-hzrx7\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.783493 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc930af-1628-46e9-8aa1-69eb569e5fe4-etc-machine-id\") 
pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.783529 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.783586 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-scripts\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.783632 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.783648 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc930af-1628-46e9-8aa1-69eb569e5fe4-logs\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.784322 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc930af-1628-46e9-8aa1-69eb569e5fe4-logs\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.797034 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data-custom\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.807094 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc930af-1628-46e9-8aa1-69eb569e5fe4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.815727 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.816647 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-scripts\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.817346 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.836107 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzrx7\" (UniqueName: \"kubernetes.io/projected/edc930af-1628-46e9-8aa1-69eb569e5fe4-kube-api-access-hzrx7\") pod \"cinder-api-0\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") " pod="openstack/cinder-api-0" Dec 04 15:57:01 crc kubenswrapper[4878]: I1204 15:57:01.877453 4878 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.036828 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.245101 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f5766fbb-8lpst" event={"ID":"a8fb7afa-745f-44f1-816b-8c5b0c9b5073","Type":"ContainerStarted","Data":"deba2c366a5daa8b8f2fa2de0ab7010cd1f89899cab7322f380cc39bf1d32b85"} Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.287665 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9f20b46-41f4-4a66-a21c-d187f50fe664","Type":"ContainerStarted","Data":"2258530bdc5c84dfc80c66e7b99a20c8853c9882956360fa44fa2f58bbe9a93e"} Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.287948 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="ceilometer-notification-agent" containerID="cri-o://81666dd97513550e197f3d3cd884225e74466ba406d78bd83f5a6a4c31eff49b" gracePeriod=30 Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.288371 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.289167 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="sg-core" containerID="cri-o://e12662a6c3296e2b53d8ef791a69694b77a984f7444af44a0cbf2bf0dd62837a" gracePeriod=30 Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.289347 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="proxy-httpd" 
containerID="cri-o://2258530bdc5c84dfc80c66e7b99a20c8853c9882956360fa44fa2f58bbe9a93e" gracePeriod=30 Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.314482 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c5668dccb-gv79r"] Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.428037 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="2ce89844f12ad0014470ec73950bdad107de7be05fbc862a4ec63ed384618b0a" exitCode=0 Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.428241 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"2ce89844f12ad0014470ec73950bdad107de7be05fbc862a4ec63ed384618b0a"} Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.428293 4878 scope.go:117] "RemoveContainer" containerID="1e4b462af175e16fbdc402637b1b344ec58c91bfdf927904f1fcb6f988194d7e" Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.456680 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" event={"ID":"95fbd38b-d57f-4a7c-b185-8afc34ff86d4","Type":"ContainerStarted","Data":"681a6ad774cc72040d52efed4281cd3cb25bde0b2224a5c97fada78f79afd9fc"} Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.505513 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 04 15:57:02 crc kubenswrapper[4878]: I1204 15:57:02.926799 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5kpzl"] Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.120176 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.563183 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5668dccb-gv79r" 
event={"ID":"460bb923-1a77-4759-98cb-b6262047cc27","Type":"ContainerStarted","Data":"894a2ea777fab6300b1be19d3ebc4990cb252ac1be5c7170b4d0b512257255b2"} Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.622037 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"edc930af-1628-46e9-8aa1-69eb569e5fe4","Type":"ContainerStarted","Data":"761a42db79b571b7e613d1fa78be3844b383d246987072f4fc3eb425e4f7fdb1"} Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.625409 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aa160d23-1687-423c-8fdc-a082bfb7482b","Type":"ContainerStarted","Data":"12d479aeab18ce168c1b07f3ff5cbde0f563a9ac1f1cfd90c10533b718365464"} Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.633584 4878 generic.go:334] "Generic (PLEG): container finished" podID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerID="2258530bdc5c84dfc80c66e7b99a20c8853c9882956360fa44fa2f58bbe9a93e" exitCode=0 Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.633626 4878 generic.go:334] "Generic (PLEG): container finished" podID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerID="e12662a6c3296e2b53d8ef791a69694b77a984f7444af44a0cbf2bf0dd62837a" exitCode=2 Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.633640 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9f20b46-41f4-4a66-a21c-d187f50fe664","Type":"ContainerDied","Data":"2258530bdc5c84dfc80c66e7b99a20c8853c9882956360fa44fa2f58bbe9a93e"} Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.633722 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9f20b46-41f4-4a66-a21c-d187f50fe664","Type":"ContainerDied","Data":"e12662a6c3296e2b53d8ef791a69694b77a984f7444af44a0cbf2bf0dd62837a"} Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.648093 4878 generic.go:334] "Generic (PLEG): container finished" 
podID="c11b03d2-f274-4022-924e-753fad2cf037" containerID="d7e38a4cad67fb1b7471e37b8a032f1c4f730d828e0290637a2b896ed0265b75" exitCode=137 Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.648226 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59844c56b5-mjb67" event={"ID":"c11b03d2-f274-4022-924e-753fad2cf037","Type":"ContainerDied","Data":"d7e38a4cad67fb1b7471e37b8a032f1c4f730d828e0290637a2b896ed0265b75"} Dec 04 15:57:03 crc kubenswrapper[4878]: I1204 15:57:03.663596 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" event={"ID":"032679f9-cf8f-4acf-8aea-37675bdf187d","Type":"ContainerStarted","Data":"bfcd250a858111b36c9bf6e38a92e5ce047813f8ebd257f424daae7ce83cb1a9"} Dec 04 15:57:03 crc kubenswrapper[4878]: W1204 15:57:03.741798 4878 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95fbd38b_d57f_4a7c_b185_8afc34ff86d4.slice/crio-conmon-bf090d400c24d548ef4742a34e93bdef9a7001ee37d1537fc812cf8aa42f3873.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95fbd38b_d57f_4a7c_b185_8afc34ff86d4.slice/crio-conmon-bf090d400c24d548ef4742a34e93bdef9a7001ee37d1537fc812cf8aa42f3873.scope: no such file or directory Dec 04 15:57:03 crc kubenswrapper[4878]: W1204 15:57:03.749506 4878 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95fbd38b_d57f_4a7c_b185_8afc34ff86d4.slice/crio-bf090d400c24d548ef4742a34e93bdef9a7001ee37d1537fc812cf8aa42f3873.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95fbd38b_d57f_4a7c_b185_8afc34ff86d4.slice/crio-bf090d400c24d548ef4742a34e93bdef9a7001ee37d1537fc812cf8aa42f3873.scope: no such file or directory Dec 04 15:57:04 crc kubenswrapper[4878]: 
I1204 15:57:04.699380 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5668dccb-gv79r" event={"ID":"460bb923-1a77-4759-98cb-b6262047cc27","Type":"ContainerStarted","Data":"f6ddc5105e3b47c833b66b6fe846c096014500b10d2676ff884f66c1f0a7c662"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.709473 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f5766fbb-8lpst" event={"ID":"a8fb7afa-745f-44f1-816b-8c5b0c9b5073","Type":"ContainerStarted","Data":"a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.720808 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"e70960a91382094bb97b7778803753c08510ffcdf745328cfe037d41064c7754"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.730174 4878 generic.go:334] "Generic (PLEG): container finished" podID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerID="f57182e0325a3f4cea993a5106fc4a8a9b4ed7fa173d00d6c16e1a672eb26fe6" exitCode=137 Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.730217 4878 generic.go:334] "Generic (PLEG): container finished" podID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerID="e5524de2dbe51dd2fd12c3179bf60588e2edf8c7b06e50ceefcfd5ea9e56503c" exitCode=137 Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.730291 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787668ff9-nlh6p" event={"ID":"e85096ea-b51a-4cda-a48b-fe63910073bb","Type":"ContainerDied","Data":"f57182e0325a3f4cea993a5106fc4a8a9b4ed7fa173d00d6c16e1a672eb26fe6"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.730328 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787668ff9-nlh6p" 
event={"ID":"e85096ea-b51a-4cda-a48b-fe63910073bb","Type":"ContainerDied","Data":"e5524de2dbe51dd2fd12c3179bf60588e2edf8c7b06e50ceefcfd5ea9e56503c"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.741224 4878 generic.go:334] "Generic (PLEG): container finished" podID="c11b03d2-f274-4022-924e-753fad2cf037" containerID="9aae95ff4d2584deea3440217247b1816f5d9f207d945925aaee6e2f034ecc75" exitCode=137 Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.741335 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59844c56b5-mjb67" event={"ID":"c11b03d2-f274-4022-924e-753fad2cf037","Type":"ContainerDied","Data":"9aae95ff4d2584deea3440217247b1816f5d9f207d945925aaee6e2f034ecc75"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.760393 4878 generic.go:334] "Generic (PLEG): container finished" podID="032679f9-cf8f-4acf-8aea-37675bdf187d" containerID="006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63" exitCode=0 Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.760530 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" event={"ID":"032679f9-cf8f-4acf-8aea-37675bdf187d","Type":"ContainerDied","Data":"006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.776906 4878 generic.go:334] "Generic (PLEG): container finished" podID="95fbd38b-d57f-4a7c-b185-8afc34ff86d4" containerID="bf090d400c24d548ef4742a34e93bdef9a7001ee37d1537fc812cf8aa42f3873" exitCode=0 Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.777076 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" event={"ID":"95fbd38b-d57f-4a7c-b185-8afc34ff86d4","Type":"ContainerDied","Data":"bf090d400c24d548ef4742a34e93bdef9a7001ee37d1537fc812cf8aa42f3873"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.787432 4878 generic.go:334] "Generic (PLEG): container finished" 
podID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerID="d6c7718074d8c6c3b9561dd2d424fec9a6e0dc30879f362728eccf812a7b0afb" exitCode=137 Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.787486 4878 generic.go:334] "Generic (PLEG): container finished" podID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerID="5f38f6dd6d1f724a8516d792d7504d592c5aaad4e3c96b25e799e8f8f629ac5f" exitCode=137 Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.787499 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b9b4f5745-7mt44" event={"ID":"64e9a31b-b17d-4589-a5fe-41f7ea2973b8","Type":"ContainerDied","Data":"d6c7718074d8c6c3b9561dd2d424fec9a6e0dc30879f362728eccf812a7b0afb"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.787561 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b9b4f5745-7mt44" event={"ID":"64e9a31b-b17d-4589-a5fe-41f7ea2973b8","Type":"ContainerDied","Data":"5f38f6dd6d1f724a8516d792d7504d592c5aaad4e3c96b25e799e8f8f629ac5f"} Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.904485 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-85684bb58-xxv4g" Dec 04 15:57:04 crc kubenswrapper[4878]: I1204 15:57:04.920807 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 04 15:57:05 crc kubenswrapper[4878]: I1204 15:57:05.811065 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"edc930af-1628-46e9-8aa1-69eb569e5fe4","Type":"ContainerStarted","Data":"c92be034b38feda116963df5d5a3c615eae4f2f2e5fbc7037881326b275d877f"} Dec 04 15:57:05 crc kubenswrapper[4878]: I1204 15:57:05.829475 4878 generic.go:334] "Generic (PLEG): container finished" podID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerID="81666dd97513550e197f3d3cd884225e74466ba406d78bd83f5a6a4c31eff49b" exitCode=0 Dec 04 15:57:05 crc kubenswrapper[4878]: I1204 15:57:05.830083 4878 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"a9f20b46-41f4-4a66-a21c-d187f50fe664","Type":"ContainerDied","Data":"81666dd97513550e197f3d3cd884225e74466ba406d78bd83f5a6a4c31eff49b"} Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.124400 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.135845 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.136507 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.306907 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb2cm\" (UniqueName: \"kubernetes.io/projected/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-kube-api-access-sb2cm\") pod \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.306990 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-horizon-secret-key\") pod \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307067 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-nb\") pod \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307100 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zcntr\" (UniqueName: \"kubernetes.io/projected/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-kube-api-access-zcntr\") pod \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307180 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-logs\") pod \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307291 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg4jr\" (UniqueName: \"kubernetes.io/projected/c11b03d2-f274-4022-924e-753fad2cf037-kube-api-access-tg4jr\") pod \"c11b03d2-f274-4022-924e-753fad2cf037\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307331 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-config\") pod \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307389 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-scripts\") pod \"c11b03d2-f274-4022-924e-753fad2cf037\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307407 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c11b03d2-f274-4022-924e-753fad2cf037-horizon-secret-key\") pod \"c11b03d2-f274-4022-924e-753fad2cf037\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " Dec 04 15:57:06 crc 
kubenswrapper[4878]: I1204 15:57:06.307427 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11b03d2-f274-4022-924e-753fad2cf037-logs\") pod \"c11b03d2-f274-4022-924e-753fad2cf037\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307487 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-config-data\") pod \"c11b03d2-f274-4022-924e-753fad2cf037\" (UID: \"c11b03d2-f274-4022-924e-753fad2cf037\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307514 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-scripts\") pod \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307562 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-svc\") pod \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307588 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-config-data\") pod \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\" (UID: \"64e9a31b-b17d-4589-a5fe-41f7ea2973b8\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307627 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-sb\") pod \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\" (UID: 
\"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.307662 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-swift-storage-0\") pod \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\" (UID: \"95fbd38b-d57f-4a7c-b185-8afc34ff86d4\") " Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.314542 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-kube-api-access-sb2cm" (OuterVolumeSpecName: "kube-api-access-sb2cm") pod "64e9a31b-b17d-4589-a5fe-41f7ea2973b8" (UID: "64e9a31b-b17d-4589-a5fe-41f7ea2973b8"). InnerVolumeSpecName "kube-api-access-sb2cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.314991 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-logs" (OuterVolumeSpecName: "logs") pod "64e9a31b-b17d-4589-a5fe-41f7ea2973b8" (UID: "64e9a31b-b17d-4589-a5fe-41f7ea2973b8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.322370 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c11b03d2-f274-4022-924e-753fad2cf037-logs" (OuterVolumeSpecName: "logs") pod "c11b03d2-f274-4022-924e-753fad2cf037" (UID: "c11b03d2-f274-4022-924e-753fad2cf037"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.443438 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb2cm\" (UniqueName: \"kubernetes.io/projected/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-kube-api-access-sb2cm\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.443479 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.443492 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c11b03d2-f274-4022-924e-753fad2cf037-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.555208 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95fbd38b-d57f-4a7c-b185-8afc34ff86d4" (UID: "95fbd38b-d57f-4a7c-b185-8afc34ff86d4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.555259 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-config-data" (OuterVolumeSpecName: "config-data") pod "c11b03d2-f274-4022-924e-753fad2cf037" (UID: "c11b03d2-f274-4022-924e-753fad2cf037"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.566224 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95fbd38b-d57f-4a7c-b185-8afc34ff86d4" (UID: "95fbd38b-d57f-4a7c-b185-8afc34ff86d4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.635247 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "95fbd38b-d57f-4a7c-b185-8afc34ff86d4" (UID: "95fbd38b-d57f-4a7c-b185-8afc34ff86d4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.635749 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-config-data" (OuterVolumeSpecName: "config-data") pod "64e9a31b-b17d-4589-a5fe-41f7ea2973b8" (UID: "64e9a31b-b17d-4589-a5fe-41f7ea2973b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.655860 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.655930 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.655941 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.655950 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.655961 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.672616 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-scripts" (OuterVolumeSpecName: "scripts") pod "c11b03d2-f274-4022-924e-753fad2cf037" (UID: "c11b03d2-f274-4022-924e-753fad2cf037"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.682434 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c11b03d2-f274-4022-924e-753fad2cf037-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "c11b03d2-f274-4022-924e-753fad2cf037" (UID: "c11b03d2-f274-4022-924e-753fad2cf037"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.683022 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-config" (OuterVolumeSpecName: "config") pod "95fbd38b-d57f-4a7c-b185-8afc34ff86d4" (UID: "95fbd38b-d57f-4a7c-b185-8afc34ff86d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.683834 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c11b03d2-f274-4022-924e-753fad2cf037-kube-api-access-tg4jr" (OuterVolumeSpecName: "kube-api-access-tg4jr") pod "c11b03d2-f274-4022-924e-753fad2cf037" (UID: "c11b03d2-f274-4022-924e-753fad2cf037"). InnerVolumeSpecName "kube-api-access-tg4jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.686186 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-kube-api-access-zcntr" (OuterVolumeSpecName: "kube-api-access-zcntr") pod "95fbd38b-d57f-4a7c-b185-8afc34ff86d4" (UID: "95fbd38b-d57f-4a7c-b185-8afc34ff86d4"). InnerVolumeSpecName "kube-api-access-zcntr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.686260 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "64e9a31b-b17d-4589-a5fe-41f7ea2973b8" (UID: "64e9a31b-b17d-4589-a5fe-41f7ea2973b8"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.698493 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-scripts" (OuterVolumeSpecName: "scripts") pod "64e9a31b-b17d-4589-a5fe-41f7ea2973b8" (UID: "64e9a31b-b17d-4589-a5fe-41f7ea2973b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.730044 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95fbd38b-d57f-4a7c-b185-8afc34ff86d4" (UID: "95fbd38b-d57f-4a7c-b185-8afc34ff86d4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.757013 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg4jr\" (UniqueName: \"kubernetes.io/projected/c11b03d2-f274-4022-924e-753fad2cf037-kube-api-access-tg4jr\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.757057 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.757068 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c11b03d2-f274-4022-924e-753fad2cf037-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.757079 4878 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/c11b03d2-f274-4022-924e-753fad2cf037-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.757090 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.757098 4878 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/64e9a31b-b17d-4589-a5fe-41f7ea2973b8-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.757106 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.757118 4878 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcntr\" (UniqueName: \"kubernetes.io/projected/95fbd38b-d57f-4a7c-b185-8afc34ff86d4-kube-api-access-zcntr\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.846617 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b9b4f5745-7mt44" event={"ID":"64e9a31b-b17d-4589-a5fe-41f7ea2973b8","Type":"ContainerDied","Data":"94c29e7d06e78ffd67e338498eabae48c63e2e3e07da42a24d6beb15f4d98135"} Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.846751 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b9b4f5745-7mt44" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.851067 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-59844c56b5-mjb67" event={"ID":"c11b03d2-f274-4022-924e-753fad2cf037","Type":"ContainerDied","Data":"6b15b701116cb2a057d141b9baa6c631e14c3c554ff8aa37896a23b425119c0e"} Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.851269 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-59844c56b5-mjb67" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.856077 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" event={"ID":"95fbd38b-d57f-4a7c-b185-8afc34ff86d4","Type":"ContainerDied","Data":"681a6ad774cc72040d52efed4281cd3cb25bde0b2224a5c97fada78f79afd9fc"} Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.856135 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-cm9wt" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.931943 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-59844c56b5-mjb67"] Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.932437 4878 scope.go:117] "RemoveContainer" containerID="d6c7718074d8c6c3b9561dd2d424fec9a6e0dc30879f362728eccf812a7b0afb" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.947401 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.950948 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-59844c56b5-mjb67"] Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.964410 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b9b4f5745-7mt44"] Dec 04 15:57:06 crc kubenswrapper[4878]: I1204 15:57:06.973694 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b9b4f5745-7mt44"] Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.048009 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cm9wt"] Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.065755 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85096ea-b51a-4cda-a48b-fe63910073bb-logs\") pod \"e85096ea-b51a-4cda-a48b-fe63910073bb\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.065860 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-scripts\") pod \"e85096ea-b51a-4cda-a48b-fe63910073bb\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.066006 4878 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zfv7\" (UniqueName: \"kubernetes.io/projected/e85096ea-b51a-4cda-a48b-fe63910073bb-kube-api-access-7zfv7\") pod \"e85096ea-b51a-4cda-a48b-fe63910073bb\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.066080 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-config-data\") pod \"e85096ea-b51a-4cda-a48b-fe63910073bb\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.066225 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e85096ea-b51a-4cda-a48b-fe63910073bb-horizon-secret-key\") pod \"e85096ea-b51a-4cda-a48b-fe63910073bb\" (UID: \"e85096ea-b51a-4cda-a48b-fe63910073bb\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.073322 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85096ea-b51a-4cda-a48b-fe63910073bb-kube-api-access-7zfv7" (OuterVolumeSpecName: "kube-api-access-7zfv7") pod "e85096ea-b51a-4cda-a48b-fe63910073bb" (UID: "e85096ea-b51a-4cda-a48b-fe63910073bb"). InnerVolumeSpecName "kube-api-access-7zfv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.080010 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85096ea-b51a-4cda-a48b-fe63910073bb-logs" (OuterVolumeSpecName: "logs") pod "e85096ea-b51a-4cda-a48b-fe63910073bb" (UID: "e85096ea-b51a-4cda-a48b-fe63910073bb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.081571 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e85096ea-b51a-4cda-a48b-fe63910073bb-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e85096ea-b51a-4cda-a48b-fe63910073bb" (UID: "e85096ea-b51a-4cda-a48b-fe63910073bb"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.115653 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-scripts" (OuterVolumeSpecName: "scripts") pod "e85096ea-b51a-4cda-a48b-fe63910073bb" (UID: "e85096ea-b51a-4cda-a48b-fe63910073bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.132956 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-config-data" (OuterVolumeSpecName: "config-data") pod "e85096ea-b51a-4cda-a48b-fe63910073bb" (UID: "e85096ea-b51a-4cda-a48b-fe63910073bb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.133047 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-cm9wt"] Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.170543 4878 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e85096ea-b51a-4cda-a48b-fe63910073bb-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.170606 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e85096ea-b51a-4cda-a48b-fe63910073bb-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.170618 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.170632 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zfv7\" (UniqueName: \"kubernetes.io/projected/e85096ea-b51a-4cda-a48b-fe63910073bb-kube-api-access-7zfv7\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.171418 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e85096ea-b51a-4cda-a48b-fe63910073bb-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.227124 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" path="/var/lib/kubelet/pods/64e9a31b-b17d-4589-a5fe-41f7ea2973b8/volumes" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.230293 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.242150 4878 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95fbd38b_d57f_4a7c_b185_8afc34ff86d4.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc11b03d2_f274_4022_924e_753fad2cf037.slice/crio-6b15b701116cb2a057d141b9baa6c631e14c3c554ff8aa37896a23b425119c0e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc11b03d2_f274_4022_924e_753fad2cf037.slice\": RecentStats: unable to find data in memory cache]" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.249787 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fbd38b-d57f-4a7c-b185-8afc34ff86d4" path="/var/lib/kubelet/pods/95fbd38b-d57f-4a7c-b185-8afc34ff86d4/volumes" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.252342 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c11b03d2-f274-4022-924e-753fad2cf037" path="/var/lib/kubelet/pods/c11b03d2-f274-4022-924e-753fad2cf037/volumes" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.263331 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f87cb9798-k84k9"] Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.264398 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerName="horizon" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.264490 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerName="horizon" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.264570 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerName="horizon-log" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.264641 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerName="horizon-log" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.264753 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerName="horizon-log" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.264813 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerName="horizon-log" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.264898 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerName="horizon" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.264960 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerName="horizon" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.265036 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11b03d2-f274-4022-924e-753fad2cf037" containerName="horizon-log" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.265120 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11b03d2-f274-4022-924e-753fad2cf037" containerName="horizon-log" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.265209 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="proxy-httpd" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.265291 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="proxy-httpd" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.265382 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" 
containerName="ceilometer-notification-agent" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.265467 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="ceilometer-notification-agent" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.265540 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fbd38b-d57f-4a7c-b185-8afc34ff86d4" containerName="init" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.265593 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fbd38b-d57f-4a7c-b185-8afc34ff86d4" containerName="init" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.265655 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="sg-core" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.265708 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="sg-core" Dec 04 15:57:07 crc kubenswrapper[4878]: E1204 15:57:07.265792 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c11b03d2-f274-4022-924e-753fad2cf037" containerName="horizon" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.265899 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c11b03d2-f274-4022-924e-753fad2cf037" containerName="horizon" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.266229 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerName="horizon-log" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.266338 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerName="horizon" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.266415 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85096ea-b51a-4cda-a48b-fe63910073bb" containerName="horizon-log" Dec 04 15:57:07 
crc kubenswrapper[4878]: I1204 15:57:07.266506 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="proxy-httpd" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.266579 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="95fbd38b-d57f-4a7c-b185-8afc34ff86d4" containerName="init" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.266712 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11b03d2-f274-4022-924e-753fad2cf037" containerName="horizon" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.266799 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="64e9a31b-b17d-4589-a5fe-41f7ea2973b8" containerName="horizon" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.266903 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="sg-core" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.266984 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c11b03d2-f274-4022-924e-753fad2cf037" containerName="horizon-log" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.267096 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" containerName="ceilometer-notification-agent" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.268590 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f87cb9798-k84k9"] Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.269645 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.277702 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-config-data\") pod \"a9f20b46-41f4-4a66-a21c-d187f50fe664\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.277851 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-sg-core-conf-yaml\") pod \"a9f20b46-41f4-4a66-a21c-d187f50fe664\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.277906 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkrtc\" (UniqueName: \"kubernetes.io/projected/a9f20b46-41f4-4a66-a21c-d187f50fe664-kube-api-access-qkrtc\") pod \"a9f20b46-41f4-4a66-a21c-d187f50fe664\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.277982 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-run-httpd\") pod \"a9f20b46-41f4-4a66-a21c-d187f50fe664\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.278008 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-log-httpd\") pod \"a9f20b46-41f4-4a66-a21c-d187f50fe664\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.278033 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-combined-ca-bundle\") pod \"a9f20b46-41f4-4a66-a21c-d187f50fe664\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.278205 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-scripts\") pod \"a9f20b46-41f4-4a66-a21c-d187f50fe664\" (UID: \"a9f20b46-41f4-4a66-a21c-d187f50fe664\") " Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.279244 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-config-data-custom\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.279329 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a95965d0-357e-422a-ab31-186d9dce897b-logs\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.279378 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64x7m\" (UniqueName: \"kubernetes.io/projected/a95965d0-357e-422a-ab31-186d9dce897b-kube-api-access-64x7m\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.279444 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-internal-tls-certs\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.279520 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-config-data\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.279658 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-combined-ca-bundle\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.279689 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-public-tls-certs\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.279727 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a9f20b46-41f4-4a66-a21c-d187f50fe664" (UID: "a9f20b46-41f4-4a66-a21c-d187f50fe664"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.279920 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a9f20b46-41f4-4a66-a21c-d187f50fe664" (UID: "a9f20b46-41f4-4a66-a21c-d187f50fe664"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.283438 4878 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.283471 4878 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9f20b46-41f4-4a66-a21c-d187f50fe664-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.283896 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.284203 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.305306 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-scripts" (OuterVolumeSpecName: "scripts") pod "a9f20b46-41f4-4a66-a21c-d187f50fe664" (UID: "a9f20b46-41f4-4a66-a21c-d187f50fe664"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.305377 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f20b46-41f4-4a66-a21c-d187f50fe664-kube-api-access-qkrtc" (OuterVolumeSpecName: "kube-api-access-qkrtc") pod "a9f20b46-41f4-4a66-a21c-d187f50fe664" (UID: "a9f20b46-41f4-4a66-a21c-d187f50fe664"). InnerVolumeSpecName "kube-api-access-qkrtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.349097 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9f20b46-41f4-4a66-a21c-d187f50fe664" (UID: "a9f20b46-41f4-4a66-a21c-d187f50fe664"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.358537 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a9f20b46-41f4-4a66-a21c-d187f50fe664" (UID: "a9f20b46-41f4-4a66-a21c-d187f50fe664"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385579 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-combined-ca-bundle\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385634 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-public-tls-certs\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385713 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-config-data-custom\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385745 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a95965d0-357e-422a-ab31-186d9dce897b-logs\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385777 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64x7m\" (UniqueName: \"kubernetes.io/projected/a95965d0-357e-422a-ab31-186d9dce897b-kube-api-access-64x7m\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " 
pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385813 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-internal-tls-certs\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385848 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-config-data\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385945 4878 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385958 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkrtc\" (UniqueName: \"kubernetes.io/projected/a9f20b46-41f4-4a66-a21c-d187f50fe664-kube-api-access-qkrtc\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385973 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.385983 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.387321 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a95965d0-357e-422a-ab31-186d9dce897b-logs\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.394842 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-internal-tls-certs\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.395131 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-config-data-custom\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.395540 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-combined-ca-bundle\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.397678 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-public-tls-certs\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.401502 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a95965d0-357e-422a-ab31-186d9dce897b-config-data\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.406127 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64x7m\" (UniqueName: \"kubernetes.io/projected/a95965d0-357e-422a-ab31-186d9dce897b-kube-api-access-64x7m\") pod \"barbican-api-5f87cb9798-k84k9\" (UID: \"a95965d0-357e-422a-ab31-186d9dce897b\") " pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.407340 4878 scope.go:117] "RemoveContainer" containerID="5f38f6dd6d1f724a8516d792d7504d592c5aaad4e3c96b25e799e8f8f629ac5f" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.469171 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-config-data" (OuterVolumeSpecName: "config-data") pod "a9f20b46-41f4-4a66-a21c-d187f50fe664" (UID: "a9f20b46-41f4-4a66-a21c-d187f50fe664"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.474655 4878 scope.go:117] "RemoveContainer" containerID="9aae95ff4d2584deea3440217247b1816f5d9f207d945925aaee6e2f034ecc75" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.510934 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9f20b46-41f4-4a66-a21c-d187f50fe664-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.676805 4878 scope.go:117] "RemoveContainer" containerID="d7e38a4cad67fb1b7471e37b8a032f1c4f730d828e0290637a2b896ed0265b75" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.707469 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.727154 4878 scope.go:117] "RemoveContainer" containerID="bf090d400c24d548ef4742a34e93bdef9a7001ee37d1537fc812cf8aa42f3873" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.915130 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7787668ff9-nlh6p" event={"ID":"e85096ea-b51a-4cda-a48b-fe63910073bb","Type":"ContainerDied","Data":"06d00462f632a4353398957f180c5c5fda1817539563b96f17d242b440b1c7a0"} Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.915201 4878 scope.go:117] "RemoveContainer" containerID="f57182e0325a3f4cea993a5106fc4a8a9b4ed7fa173d00d6c16e1a672eb26fe6" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.915359 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7787668ff9-nlh6p" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.963863 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7787668ff9-nlh6p"] Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.967668 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7c7d556697-lmlhb" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.973492 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.984563 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f5766fbb-8lpst" event={"ID":"a8fb7afa-745f-44f1-816b-8c5b0c9b5073","Type":"ContainerStarted","Data":"80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917"} Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.985496 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:57:07 crc kubenswrapper[4878]: I1204 15:57:07.985547 4878 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75f5766fbb-8lpst" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.023337 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7787668ff9-nlh6p"] Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.049094 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9f20b46-41f4-4a66-a21c-d187f50fe664","Type":"ContainerDied","Data":"c9a2c697b0494970ff2f0e0d2ad06259d7ff35ee510a7a46977960f654857384"} Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.049336 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.090272 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" podStartSLOduration=7.090247584 podStartE2EDuration="7.090247584s" podCreationTimestamp="2025-12-04 15:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:08.047859659 +0000 UTC m=+1272.010396615" watchObservedRunningTime="2025-12-04 15:57:08.090247584 +0000 UTC m=+1272.052784530" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.203196 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c5668dccb-gv79r" event={"ID":"460bb923-1a77-4759-98cb-b6262047cc27","Type":"ContainerStarted","Data":"7910e278ccaf98c9036ab0b2c4952b34f7008e6dddcd78dca55b7b825e0e58ab"} Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.205066 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.205630 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:08 crc 
kubenswrapper[4878]: I1204 15:57:08.207974 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85684bb58-xxv4g"] Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.208365 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85684bb58-xxv4g" podUID="548da95d-a291-478d-b9f6-c3b62b110de3" containerName="neutron-api" containerID="cri-o://7fbf0e0c50b3f86a7a7ebb6cc4ecaf7cdc4438812a86484491748d59f6b89d8b" gracePeriod=30 Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.208635 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85684bb58-xxv4g" podUID="548da95d-a291-478d-b9f6-c3b62b110de3" containerName="neutron-httpd" containerID="cri-o://c723caafaef6bec6a4253ec63948917b828ec31d3cb066393337d7aefd97f02e" gracePeriod=30 Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.209620 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75f5766fbb-8lpst" podStartSLOduration=9.209607072 podStartE2EDuration="9.209607072s" podCreationTimestamp="2025-12-04 15:56:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:08.090952882 +0000 UTC m=+1272.053489838" watchObservedRunningTime="2025-12-04 15:57:08.209607072 +0000 UTC m=+1272.172144028" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.398533 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.413156 4878 scope.go:117] "RemoveContainer" containerID="e5524de2dbe51dd2fd12c3179bf60588e2edf8c7b06e50ceefcfd5ea9e56503c" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.446105 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.496960 4878 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.500834 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.533514 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.533695 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.552766 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.570341 4878 scope.go:117] "RemoveContainer" containerID="2258530bdc5c84dfc80c66e7b99a20c8853c9882956360fa44fa2f58bbe9a93e" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.577539 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.577605 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-config-data\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.577649 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-scripts\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc 
kubenswrapper[4878]: I1204 15:57:08.577714 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75xdz\" (UniqueName: \"kubernetes.io/projected/7c0e5133-6961-440e-902a-ee637e87c2c8-kube-api-access-75xdz\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.577744 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-run-httpd\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.577792 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-log-httpd\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.577814 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.582128 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c5668dccb-gv79r" podStartSLOduration=8.582100429 podStartE2EDuration="8.582100429s" podCreationTimestamp="2025-12-04 15:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:08.396338503 +0000 UTC m=+1272.358875479" 
watchObservedRunningTime="2025-12-04 15:57:08.582100429 +0000 UTC m=+1272.544637385" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.651304 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f87cb9798-k84k9"] Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.687725 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.687820 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-config-data\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.687900 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-scripts\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.687983 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75xdz\" (UniqueName: \"kubernetes.io/projected/7c0e5133-6961-440e-902a-ee637e87c2c8-kube-api-access-75xdz\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.688024 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-run-httpd\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " 
pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.688057 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-log-httpd\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.688083 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.690418 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-run-httpd\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.693717 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-log-httpd\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.700551 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.702081 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-scripts\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.704683 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-config-data\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.716063 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.732287 4878 scope.go:117] "RemoveContainer" containerID="e12662a6c3296e2b53d8ef791a69694b77a984f7444af44a0cbf2bf0dd62837a" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.732736 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75xdz\" (UniqueName: \"kubernetes.io/projected/7c0e5133-6961-440e-902a-ee637e87c2c8-kube-api-access-75xdz\") pod \"ceilometer-0\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.895930 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:08 crc kubenswrapper[4878]: I1204 15:57:08.955815 4878 scope.go:117] "RemoveContainer" containerID="81666dd97513550e197f3d3cd884225e74466ba406d78bd83f5a6a4c31eff49b" Dec 04 15:57:09 crc kubenswrapper[4878]: I1204 15:57:09.208176 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f20b46-41f4-4a66-a21c-d187f50fe664" path="/var/lib/kubelet/pods/a9f20b46-41f4-4a66-a21c-d187f50fe664/volumes" Dec 04 15:57:09 crc kubenswrapper[4878]: I1204 15:57:09.213569 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85096ea-b51a-4cda-a48b-fe63910073bb" path="/var/lib/kubelet/pods/e85096ea-b51a-4cda-a48b-fe63910073bb/volumes" Dec 04 15:57:09 crc kubenswrapper[4878]: I1204 15:57:09.227232 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" event={"ID":"9a85aaed-250a-44a2-aa46-3ca586b53e2b","Type":"ContainerStarted","Data":"a46acafaabf6e4ed9afa0eb3d32cc4b27ce17500fe0045733ba728483083e3f6"} Dec 04 15:57:09 crc kubenswrapper[4878]: I1204 15:57:09.232827 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" event={"ID":"032679f9-cf8f-4acf-8aea-37675bdf187d","Type":"ContainerStarted","Data":"c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e"} Dec 04 15:57:09 crc kubenswrapper[4878]: I1204 15:57:09.239754 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f87cb9798-k84k9" event={"ID":"a95965d0-357e-422a-ab31-186d9dce897b","Type":"ContainerStarted","Data":"2b4f102a1c4530339df5500dcba024576ea9d49d4a5f3614cecb1358753c6368"} Dec 04 15:57:09 crc kubenswrapper[4878]: I1204 15:57:09.245126 4878 generic.go:334] "Generic (PLEG): container finished" podID="548da95d-a291-478d-b9f6-c3b62b110de3" containerID="c723caafaef6bec6a4253ec63948917b828ec31d3cb066393337d7aefd97f02e" exitCode=0 Dec 04 15:57:09 crc kubenswrapper[4878]: I1204 
15:57:09.245182 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85684bb58-xxv4g" event={"ID":"548da95d-a291-478d-b9f6-c3b62b110de3","Type":"ContainerDied","Data":"c723caafaef6bec6a4253ec63948917b828ec31d3cb066393337d7aefd97f02e"} Dec 04 15:57:09 crc kubenswrapper[4878]: I1204 15:57:09.251961 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5958c7964f-4fxmd" event={"ID":"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0","Type":"ContainerStarted","Data":"0350e26180510005876244b0d34ffdc59ac3fbafd7dd792e05ec92a02d2e80d2"} Dec 04 15:57:09 crc kubenswrapper[4878]: I1204 15:57:09.695952 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.170814 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.327416 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerStarted","Data":"fd975625ad8c55eac7143890c492dab09d7e94230a90f51c5a4ca4ff080e5e34"} Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.356078 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" event={"ID":"9a85aaed-250a-44a2-aa46-3ca586b53e2b","Type":"ContainerStarted","Data":"9c1b21baa3e7fb3c7ed90236d327bd3e5bd762ec47f773a104b95ec4861cd30a"} Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.364357 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f87cb9798-k84k9" event={"ID":"a95965d0-357e-422a-ab31-186d9dce897b","Type":"ContainerStarted","Data":"70922067c87effab57e9b6d26d86cb4d33cc453f6eae1e5d7f15da2c827180bb"} Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.364440 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-api-5f87cb9798-k84k9" event={"ID":"a95965d0-357e-422a-ab31-186d9dce897b","Type":"ContainerStarted","Data":"995f3781abb1881943be106e07cb7a3e0029b9f67e1b40a04985077f031bccd1"} Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.365280 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.365405 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.387695 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-c7d97c576-6crcc" podStartSLOduration=5.451351118 podStartE2EDuration="12.387672685s" podCreationTimestamp="2025-12-04 15:56:58 +0000 UTC" firstStartedPulling="2025-12-04 15:57:00.792954779 +0000 UTC m=+1264.755491735" lastFinishedPulling="2025-12-04 15:57:07.729276346 +0000 UTC m=+1271.691813302" observedRunningTime="2025-12-04 15:57:10.386721881 +0000 UTC m=+1274.349258837" watchObservedRunningTime="2025-12-04 15:57:10.387672685 +0000 UTC m=+1274.350209641" Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.424549 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"edc930af-1628-46e9-8aa1-69eb569e5fe4","Type":"ContainerStarted","Data":"b160ac46d56c87fc8dbc40d8022858c2e934ec71a9ddde6b358199f33481730f"} Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.425178 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerName="cinder-api-log" containerID="cri-o://c92be034b38feda116963df5d5a3c615eae4f2f2e5fbc7037881326b275d877f" gracePeriod=30 Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.425473 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cinder-api-0"
Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.425788 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerName="cinder-api" containerID="cri-o://b160ac46d56c87fc8dbc40d8022858c2e934ec71a9ddde6b358199f33481730f" gracePeriod=30
Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.438777 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f87cb9798-k84k9" podStartSLOduration=3.438750938 podStartE2EDuration="3.438750938s" podCreationTimestamp="2025-12-04 15:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:10.421753811 +0000 UTC m=+1274.384290757" watchObservedRunningTime="2025-12-04 15:57:10.438750938 +0000 UTC m=+1274.401287894"
Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.452284 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aa160d23-1687-423c-8fdc-a082bfb7482b","Type":"ContainerStarted","Data":"474309a47ad7719ef89c64136273e43aef6422b0ddbd27f22b41904288ee9fa6"}
Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.471267 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.471229674 podStartE2EDuration="9.471229674s" podCreationTimestamp="2025-12-04 15:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:10.449791486 +0000 UTC m=+1274.412328442" watchObservedRunningTime="2025-12-04 15:57:10.471229674 +0000 UTC m=+1274.433766650"
Dec 04 15:57:10 crc kubenswrapper[4878]: I1204 15:57:10.479074 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5958c7964f-4fxmd" event={"ID":"a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0","Type":"ContainerStarted","Data":"f0f01c73bc77456f3c0158da6519f35a4fb011c152db7c5d869801c673cf0310"}
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.521909 4878 generic.go:334] "Generic (PLEG): container finished" podID="548da95d-a291-478d-b9f6-c3b62b110de3" containerID="7fbf0e0c50b3f86a7a7ebb6cc4ecaf7cdc4438812a86484491748d59f6b89d8b" exitCode=0
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.522502 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85684bb58-xxv4g" event={"ID":"548da95d-a291-478d-b9f6-c3b62b110de3","Type":"ContainerDied","Data":"7fbf0e0c50b3f86a7a7ebb6cc4ecaf7cdc4438812a86484491748d59f6b89d8b"}
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.540106 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerStarted","Data":"2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076"}
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.562093 4878 generic.go:334] "Generic (PLEG): container finished" podID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerID="b160ac46d56c87fc8dbc40d8022858c2e934ec71a9ddde6b358199f33481730f" exitCode=0
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.562383 4878 generic.go:334] "Generic (PLEG): container finished" podID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerID="c92be034b38feda116963df5d5a3c615eae4f2f2e5fbc7037881326b275d877f" exitCode=143
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.562519 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"edc930af-1628-46e9-8aa1-69eb569e5fe4","Type":"ContainerDied","Data":"b160ac46d56c87fc8dbc40d8022858c2e934ec71a9ddde6b358199f33481730f"}
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.562608 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"edc930af-1628-46e9-8aa1-69eb569e5fe4","Type":"ContainerDied","Data":"c92be034b38feda116963df5d5a3c615eae4f2f2e5fbc7037881326b275d877f"}
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.616658 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aa160d23-1687-423c-8fdc-a082bfb7482b","Type":"ContainerStarted","Data":"afbc07672892aef61704ec56c94bda4b747ea65cbaf29b1ec56332aaaafd42e4"}
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.634566 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85684bb58-xxv4g"
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.663350 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=6.193747565 podStartE2EDuration="10.663330178s" podCreationTimestamp="2025-12-04 15:57:01 +0000 UTC" firstStartedPulling="2025-12-04 15:57:02.571401613 +0000 UTC m=+1266.533938569" lastFinishedPulling="2025-12-04 15:57:07.040984226 +0000 UTC m=+1271.003521182" observedRunningTime="2025-12-04 15:57:11.651324777 +0000 UTC m=+1275.613861733" watchObservedRunningTime="2025-12-04 15:57:11.663330178 +0000 UTC m=+1275.625867134"
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.664254 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5958c7964f-4fxmd" podStartSLOduration=6.491039103 podStartE2EDuration="13.664244521s" podCreationTimestamp="2025-12-04 15:56:58 +0000 UTC" firstStartedPulling="2025-12-04 15:57:00.234214943 +0000 UTC m=+1264.196751899" lastFinishedPulling="2025-12-04 15:57:07.407420371 +0000 UTC m=+1271.369957317" observedRunningTime="2025-12-04 15:57:10.511266009 +0000 UTC m=+1274.473802975" watchObservedRunningTime="2025-12-04 15:57:11.664244521 +0000 UTC m=+1275.626781477"
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.783538 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.811512 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-scripts\") pod \"edc930af-1628-46e9-8aa1-69eb569e5fe4\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.811576 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-httpd-config\") pod \"548da95d-a291-478d-b9f6-c3b62b110de3\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.811688 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-combined-ca-bundle\") pod \"edc930af-1628-46e9-8aa1-69eb569e5fe4\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.811735 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data\") pod \"edc930af-1628-46e9-8aa1-69eb569e5fe4\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.811780 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzrx7\" (UniqueName: \"kubernetes.io/projected/edc930af-1628-46e9-8aa1-69eb569e5fe4-kube-api-access-hzrx7\") pod \"edc930af-1628-46e9-8aa1-69eb569e5fe4\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.811837 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-combined-ca-bundle\") pod \"548da95d-a291-478d-b9f6-c3b62b110de3\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.811903 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txqcf\" (UniqueName: \"kubernetes.io/projected/548da95d-a291-478d-b9f6-c3b62b110de3-kube-api-access-txqcf\") pod \"548da95d-a291-478d-b9f6-c3b62b110de3\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.811993 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data-custom\") pod \"edc930af-1628-46e9-8aa1-69eb569e5fe4\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.812090 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-ovndb-tls-certs\") pod \"548da95d-a291-478d-b9f6-c3b62b110de3\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.812133 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc930af-1628-46e9-8aa1-69eb569e5fe4-logs\") pod \"edc930af-1628-46e9-8aa1-69eb569e5fe4\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.812157 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc930af-1628-46e9-8aa1-69eb569e5fe4-etc-machine-id\") pod \"edc930af-1628-46e9-8aa1-69eb569e5fe4\" (UID: \"edc930af-1628-46e9-8aa1-69eb569e5fe4\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.812200 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-config\") pod \"548da95d-a291-478d-b9f6-c3b62b110de3\" (UID: \"548da95d-a291-478d-b9f6-c3b62b110de3\") "
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.819101 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edc930af-1628-46e9-8aa1-69eb569e5fe4-logs" (OuterVolumeSpecName: "logs") pod "edc930af-1628-46e9-8aa1-69eb569e5fe4" (UID: "edc930af-1628-46e9-8aa1-69eb569e5fe4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.819227 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc930af-1628-46e9-8aa1-69eb569e5fe4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "edc930af-1628-46e9-8aa1-69eb569e5fe4" (UID: "edc930af-1628-46e9-8aa1-69eb569e5fe4"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.826088 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-scripts" (OuterVolumeSpecName: "scripts") pod "edc930af-1628-46e9-8aa1-69eb569e5fe4" (UID: "edc930af-1628-46e9-8aa1-69eb569e5fe4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.828200 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "548da95d-a291-478d-b9f6-c3b62b110de3" (UID: "548da95d-a291-478d-b9f6-c3b62b110de3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.835537 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc930af-1628-46e9-8aa1-69eb569e5fe4-kube-api-access-hzrx7" (OuterVolumeSpecName: "kube-api-access-hzrx7") pod "edc930af-1628-46e9-8aa1-69eb569e5fe4" (UID: "edc930af-1628-46e9-8aa1-69eb569e5fe4"). InnerVolumeSpecName "kube-api-access-hzrx7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.837056 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "edc930af-1628-46e9-8aa1-69eb569e5fe4" (UID: "edc930af-1628-46e9-8aa1-69eb569e5fe4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.848681 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548da95d-a291-478d-b9f6-c3b62b110de3-kube-api-access-txqcf" (OuterVolumeSpecName: "kube-api-access-txqcf") pod "548da95d-a291-478d-b9f6-c3b62b110de3" (UID: "548da95d-a291-478d-b9f6-c3b62b110de3"). InnerVolumeSpecName "kube-api-access-txqcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.895112 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edc930af-1628-46e9-8aa1-69eb569e5fe4" (UID: "edc930af-1628-46e9-8aa1-69eb569e5fe4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.914948 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-scripts\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.914986 4878 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-httpd-config\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.914995 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.915006 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzrx7\" (UniqueName: \"kubernetes.io/projected/edc930af-1628-46e9-8aa1-69eb569e5fe4-kube-api-access-hzrx7\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.915020 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txqcf\" (UniqueName: \"kubernetes.io/projected/548da95d-a291-478d-b9f6-c3b62b110de3-kube-api-access-txqcf\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.915028 4878 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.915036 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/edc930af-1628-46e9-8aa1-69eb569e5fe4-logs\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.915045 4878 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/edc930af-1628-46e9-8aa1-69eb569e5fe4-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.939433 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "548da95d-a291-478d-b9f6-c3b62b110de3" (UID: "548da95d-a291-478d-b9f6-c3b62b110de3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.943616 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "548da95d-a291-478d-b9f6-c3b62b110de3" (UID: "548da95d-a291-478d-b9f6-c3b62b110de3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.977790 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-config" (OuterVolumeSpecName: "config") pod "548da95d-a291-478d-b9f6-c3b62b110de3" (UID: "548da95d-a291-478d-b9f6-c3b62b110de3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:11 crc kubenswrapper[4878]: I1204 15:57:11.987080 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data" (OuterVolumeSpecName: "config-data") pod "edc930af-1628-46e9-8aa1-69eb569e5fe4" (UID: "edc930af-1628-46e9-8aa1-69eb569e5fe4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.018340 4878 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.018643 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-config\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.018709 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edc930af-1628-46e9-8aa1-69eb569e5fe4-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.018777 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/548da95d-a291-478d-b9f6-c3b62b110de3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.684713 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerStarted","Data":"974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0"}
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.686748 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerStarted","Data":"382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2"}
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.690662 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"edc930af-1628-46e9-8aa1-69eb569e5fe4","Type":"ContainerDied","Data":"761a42db79b571b7e613d1fa78be3844b383d246987072f4fc3eb425e4f7fdb1"}
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.690791 4878 scope.go:117] "RemoveContainer" containerID="b160ac46d56c87fc8dbc40d8022858c2e934ec71a9ddde6b358199f33481730f"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.691140 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.707213 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85684bb58-xxv4g"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.708312 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85684bb58-xxv4g" event={"ID":"548da95d-a291-478d-b9f6-c3b62b110de3","Type":"ContainerDied","Data":"0eabb792d30dc4c25df13ebb4a9fc57bddd9b7f6c1c31179cbf5000380e21888"}
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.721465 4878 scope.go:117] "RemoveContainer" containerID="c92be034b38feda116963df5d5a3c615eae4f2f2e5fbc7037881326b275d877f"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.760808 4878 scope.go:117] "RemoveContainer" containerID="c723caafaef6bec6a4253ec63948917b828ec31d3cb066393337d7aefd97f02e"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.782227 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.802191 4878 scope.go:117] "RemoveContainer" containerID="7fbf0e0c50b3f86a7a7ebb6cc4ecaf7cdc4438812a86484491748d59f6b89d8b"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.839065 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.880564 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Dec 04 15:57:12 crc kubenswrapper[4878]: E1204 15:57:12.882173 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548da95d-a291-478d-b9f6-c3b62b110de3" containerName="neutron-httpd"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.882204 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="548da95d-a291-478d-b9f6-c3b62b110de3" containerName="neutron-httpd"
Dec 04 15:57:12 crc kubenswrapper[4878]: E1204 15:57:12.882242 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerName="cinder-api"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.882256 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerName="cinder-api"
Dec 04 15:57:12 crc kubenswrapper[4878]: E1204 15:57:12.882523 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548da95d-a291-478d-b9f6-c3b62b110de3" containerName="neutron-api"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.882535 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="548da95d-a291-478d-b9f6-c3b62b110de3" containerName="neutron-api"
Dec 04 15:57:12 crc kubenswrapper[4878]: E1204 15:57:12.882568 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerName="cinder-api-log"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.882576 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerName="cinder-api-log"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.883093 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerName="cinder-api"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.883127 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="548da95d-a291-478d-b9f6-c3b62b110de3" containerName="neutron-api"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.883193 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="548da95d-a291-478d-b9f6-c3b62b110de3" containerName="neutron-httpd"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.883217 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="edc930af-1628-46e9-8aa1-69eb569e5fe4" containerName="cinder-api-log"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.886920 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.893686 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.893699 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.902705 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.915408 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85684bb58-xxv4g"]
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.929585 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-85684bb58-xxv4g"]
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.939626 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.988730 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b95fffa-975c-44f0-ae14-d0ac3bd06053-logs\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.988806 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.988884 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b95fffa-975c-44f0-ae14-d0ac3bd06053-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.988910 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnnpl\" (UniqueName: \"kubernetes.io/projected/1b95fffa-975c-44f0-ae14-d0ac3bd06053-kube-api-access-bnnpl\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.989115 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-config-data\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.989331 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-config-data-custom\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.989391 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-scripts\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.989565 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:12 crc kubenswrapper[4878]: I1204 15:57:12.989656 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.091824 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-config-data-custom\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.091930 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-scripts\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.092012 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.092045 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.092101 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b95fffa-975c-44f0-ae14-d0ac3bd06053-logs\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.092135 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.092187 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b95fffa-975c-44f0-ae14-d0ac3bd06053-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.092215 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnpl\" (UniqueName: \"kubernetes.io/projected/1b95fffa-975c-44f0-ae14-d0ac3bd06053-kube-api-access-bnnpl\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.092247 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-config-data\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.094088 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1b95fffa-975c-44f0-ae14-d0ac3bd06053-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.095183 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b95fffa-975c-44f0-ae14-d0ac3bd06053-logs\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.103904 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.104084 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-scripts\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.104232 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-config-data-custom\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.103928 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-public-tls-certs\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.104549 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.105271 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b95fffa-975c-44f0-ae14-d0ac3bd06053-config-data\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.116531 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnnpl\" (UniqueName: \"kubernetes.io/projected/1b95fffa-975c-44f0-ae14-d0ac3bd06053-kube-api-access-bnnpl\") pod \"cinder-api-0\" (UID: \"1b95fffa-975c-44f0-ae14-d0ac3bd06053\") " pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.130793 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75f5766fbb-8lpst"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.193551 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="548da95d-a291-478d-b9f6-c3b62b110de3" path="/var/lib/kubelet/pods/548da95d-a291-478d-b9f6-c3b62b110de3/volumes"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.194659 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edc930af-1628-46e9-8aa1-69eb569e5fe4" path="/var/lib/kubelet/pods/edc930af-1628-46e9-8aa1-69eb569e5fe4/volumes"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.230829 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 04 15:57:13 crc kubenswrapper[4878]: I1204 15:57:13.808511 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 04 15:57:14 crc kubenswrapper[4878]: I1204 15:57:14.105059 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-db576cdd4-fp9zg"
Dec 04 15:57:14 crc kubenswrapper[4878]: I1204 15:57:14.284946 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6c56cbf696-wj6zc"
Dec 04 15:57:14 crc kubenswrapper[4878]: I1204 15:57:14.732003 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1b95fffa-975c-44f0-ae14-d0ac3bd06053","Type":"ContainerStarted","Data":"341e84cc22f9d7437e199785802c4d3fd17d4469aaeb0e262a2047cdbe6dfca4"}
Dec 04 15:57:14 crc kubenswrapper[4878]: I1204 15:57:14.734936 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerStarted","Data":"77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d"}
Dec 04 15:57:14 crc kubenswrapper[4878]: I1204 15:57:14.735396 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 04 15:57:14 crc kubenswrapper[4878]: I1204 15:57:14.773270 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.598581882 podStartE2EDuration="6.773241938s" podCreationTimestamp="2025-12-04 15:57:08 +0000 UTC" firstStartedPulling="2025-12-04 15:57:09.713081799 +0000 UTC m=+1273.675618755" lastFinishedPulling="2025-12-04 15:57:13.887741855 +0000 UTC m=+1277.850278811" observedRunningTime="2025-12-04 15:57:14.764509869 +0000 UTC m=+1278.727046825" watchObservedRunningTime="2025-12-04 15:57:14.773241938 +0000 UTC m=+1278.735778894"
Dec 04 15:57:14 crc kubenswrapper[4878]: I1204 15:57:14.975663 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75f5766fbb-8lpst"
Dec 04 15:57:15 crc kubenswrapper[4878]: I1204 15:57:15.754555 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1b95fffa-975c-44f0-ae14-d0ac3bd06053","Type":"ContainerStarted","Data":"8dad26e1ccce535a7a9eae4b8ec4342f93ccd727321f171d0988772704dcaa19"}
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.171332 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-db576cdd4-fp9zg"
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.277401 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6c56cbf696-wj6zc"
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.356126 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-db576cdd4-fp9zg"]
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.482437 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.787853 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1b95fffa-975c-44f0-ae14-d0ac3bd06053","Type":"ContainerStarted","Data":"8911dc4a337f53feceeff68e1fa856eb22bc6239694decc333fb0e5ed403b49c"}
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.788111 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-db576cdd4-fp9zg" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon-log" containerID="cri-o://6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0" gracePeriod=30
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.788295 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-db576cdd4-fp9zg" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon" containerID="cri-o://943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2" gracePeriod=30
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.815472 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.815445908 podStartE2EDuration="4.815445908s" podCreationTimestamp="2025-12-04 15:57:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:16.814205657 +0000 UTC m=+1280.776742633" watchObservedRunningTime="2025-12-04 15:57:16.815445908 +0000 UTC m=+1280.777982864"
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.880597 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl"
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.886719 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.961330 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wsxfk"]
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.961736 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" podUID="b0d830a5-d873-4309-abfc-5354c3dfe4ef" containerName="dnsmasq-dns" containerID="cri-o://c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d" gracePeriod=10
Dec 04 15:57:16 crc kubenswrapper[4878]: I1204 15:57:16.985732 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.549808 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.635708 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9qz9\" (UniqueName: \"kubernetes.io/projected/b0d830a5-d873-4309-abfc-5354c3dfe4ef-kube-api-access-g9qz9\") pod \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.635799 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-config\") pod \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.635837 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-swift-storage-0\") pod \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.635950 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-nb\") pod \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.635998 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-sb\") pod \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.636092 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-svc\") pod \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\" (UID: \"b0d830a5-d873-4309-abfc-5354c3dfe4ef\") " Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.659058 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d830a5-d873-4309-abfc-5354c3dfe4ef-kube-api-access-g9qz9" (OuterVolumeSpecName: "kube-api-access-g9qz9") pod "b0d830a5-d873-4309-abfc-5354c3dfe4ef" (UID: "b0d830a5-d873-4309-abfc-5354c3dfe4ef"). InnerVolumeSpecName "kube-api-access-g9qz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.711894 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b0d830a5-d873-4309-abfc-5354c3dfe4ef" (UID: "b0d830a5-d873-4309-abfc-5354c3dfe4ef"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.720006 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b0d830a5-d873-4309-abfc-5354c3dfe4ef" (UID: "b0d830a5-d873-4309-abfc-5354c3dfe4ef"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.725713 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b0d830a5-d873-4309-abfc-5354c3dfe4ef" (UID: "b0d830a5-d873-4309-abfc-5354c3dfe4ef"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.732236 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-config" (OuterVolumeSpecName: "config") pod "b0d830a5-d873-4309-abfc-5354c3dfe4ef" (UID: "b0d830a5-d873-4309-abfc-5354c3dfe4ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.740498 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b0d830a5-d873-4309-abfc-5354c3dfe4ef" (UID: "b0d830a5-d873-4309-abfc-5354c3dfe4ef"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.740638 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.740698 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.740714 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9qz9\" (UniqueName: \"kubernetes.io/projected/b0d830a5-d873-4309-abfc-5354c3dfe4ef-kube-api-access-g9qz9\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.740734 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:17 crc 
kubenswrapper[4878]: I1204 15:57:17.740748 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.807778 4878 generic.go:334] "Generic (PLEG): container finished" podID="b0d830a5-d873-4309-abfc-5354c3dfe4ef" containerID="c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d" exitCode=0 Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.808093 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerName="cinder-scheduler" containerID="cri-o://474309a47ad7719ef89c64136273e43aef6422b0ddbd27f22b41904288ee9fa6" gracePeriod=30 Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.808531 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.815278 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" event={"ID":"b0d830a5-d873-4309-abfc-5354c3dfe4ef","Type":"ContainerDied","Data":"c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d"} Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.815343 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-wsxfk" event={"ID":"b0d830a5-d873-4309-abfc-5354c3dfe4ef","Type":"ContainerDied","Data":"718bf442d1f105860dd8692407712ada62ee6768da1e054380a7a27e15b24750"} Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.815316 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerName="probe" 
containerID="cri-o://afbc07672892aef61704ec56c94bda4b747ea65cbaf29b1ec56332aaaafd42e4" gracePeriod=30 Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.815371 4878 scope.go:117] "RemoveContainer" containerID="c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.816066 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.844482 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b0d830a5-d873-4309-abfc-5354c3dfe4ef-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.853264 4878 scope.go:117] "RemoveContainer" containerID="633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.871660 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wsxfk"] Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.879301 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-wsxfk"] Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.886473 4878 scope.go:117] "RemoveContainer" containerID="c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d" Dec 04 15:57:17 crc kubenswrapper[4878]: E1204 15:57:17.888726 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d\": container with ID starting with c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d not found: ID does not exist" containerID="c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.888773 4878 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d"} err="failed to get container status \"c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d\": rpc error: code = NotFound desc = could not find container \"c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d\": container with ID starting with c2839465f875be7bf6ebfdf2cf02fc1bc48455f531095e8f2856f26786e9440d not found: ID does not exist" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.888811 4878 scope.go:117] "RemoveContainer" containerID="633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3" Dec 04 15:57:17 crc kubenswrapper[4878]: E1204 15:57:17.889370 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3\": container with ID starting with 633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3 not found: ID does not exist" containerID="633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3" Dec 04 15:57:17 crc kubenswrapper[4878]: I1204 15:57:17.889420 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3"} err="failed to get container status \"633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3\": rpc error: code = NotFound desc = could not find container \"633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3\": container with ID starting with 633b6bb7f471dfa53f581f7eaf8e1a79602928ad9a81628e3e9d2877b546a8c3 not found: ID does not exist" Dec 04 15:57:18 crc kubenswrapper[4878]: I1204 15:57:18.821751 4878 generic.go:334] "Generic (PLEG): container finished" podID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerID="afbc07672892aef61704ec56c94bda4b747ea65cbaf29b1ec56332aaaafd42e4" exitCode=0 Dec 04 15:57:18 crc kubenswrapper[4878]: 
I1204 15:57:18.821896 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aa160d23-1687-423c-8fdc-a082bfb7482b","Type":"ContainerDied","Data":"afbc07672892aef61704ec56c94bda4b747ea65cbaf29b1ec56332aaaafd42e4"} Dec 04 15:57:19 crc kubenswrapper[4878]: I1204 15:57:19.193296 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d830a5-d873-4309-abfc-5354c3dfe4ef" path="/var/lib/kubelet/pods/b0d830a5-d873-4309-abfc-5354c3dfe4ef/volumes" Dec 04 15:57:19 crc kubenswrapper[4878]: I1204 15:57:19.560591 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:19 crc kubenswrapper[4878]: I1204 15:57:19.694387 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f87cb9798-k84k9" Dec 04 15:57:19 crc kubenswrapper[4878]: I1204 15:57:19.772846 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75f5766fbb-8lpst"] Dec 04 15:57:19 crc kubenswrapper[4878]: I1204 15:57:19.773446 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75f5766fbb-8lpst" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api-log" containerID="cri-o://a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7" gracePeriod=30 Dec 04 15:57:19 crc kubenswrapper[4878]: I1204 15:57:19.773806 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75f5766fbb-8lpst" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api" containerID="cri-o://80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917" gracePeriod=30 Dec 04 15:57:20 crc kubenswrapper[4878]: I1204 15:57:20.857854 4878 generic.go:334] "Generic (PLEG): container finished" podID="50fc708e-8903-4765-aa76-c2125c0b8d22" 
containerID="943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2" exitCode=0 Dec 04 15:57:20 crc kubenswrapper[4878]: I1204 15:57:20.857999 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db576cdd4-fp9zg" event={"ID":"50fc708e-8903-4765-aa76-c2125c0b8d22","Type":"ContainerDied","Data":"943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2"} Dec 04 15:57:20 crc kubenswrapper[4878]: I1204 15:57:20.860530 4878 generic.go:334] "Generic (PLEG): container finished" podID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerID="a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7" exitCode=143 Dec 04 15:57:20 crc kubenswrapper[4878]: I1204 15:57:20.860569 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f5766fbb-8lpst" event={"ID":"a8fb7afa-745f-44f1-816b-8c5b0c9b5073","Type":"ContainerDied","Data":"a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7"} Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.007848 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-db576cdd4-fp9zg" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.134042 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-96bf8d55-s7dcq" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.503709 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 04 15:57:21 crc kubenswrapper[4878]: E1204 15:57:21.505539 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d830a5-d873-4309-abfc-5354c3dfe4ef" containerName="init" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.505562 4878 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b0d830a5-d873-4309-abfc-5354c3dfe4ef" containerName="init" Dec 04 15:57:21 crc kubenswrapper[4878]: E1204 15:57:21.505616 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d830a5-d873-4309-abfc-5354c3dfe4ef" containerName="dnsmasq-dns" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.505625 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d830a5-d873-4309-abfc-5354c3dfe4ef" containerName="dnsmasq-dns" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.506288 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d830a5-d873-4309-abfc-5354c3dfe4ef" containerName="dnsmasq-dns" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.507985 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.512114 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-l8qtv" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.514376 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.516699 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.541555 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.658802 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3b24340-938a-4130-a002-841b398d49c5-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.659042 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3b24340-938a-4130-a002-841b398d49c5-openstack-config\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.659232 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv2c5\" (UniqueName: \"kubernetes.io/projected/c3b24340-938a-4130-a002-841b398d49c5-kube-api-access-lv2c5\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.659345 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b24340-938a-4130-a002-841b398d49c5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.760766 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b24340-938a-4130-a002-841b398d49c5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.760845 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3b24340-938a-4130-a002-841b398d49c5-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.760947 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/c3b24340-938a-4130-a002-841b398d49c5-openstack-config\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.760983 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv2c5\" (UniqueName: \"kubernetes.io/projected/c3b24340-938a-4130-a002-841b398d49c5-kube-api-access-lv2c5\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.762395 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c3b24340-938a-4130-a002-841b398d49c5-openstack-config\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.770748 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c3b24340-938a-4130-a002-841b398d49c5-openstack-config-secret\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.775043 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3b24340-938a-4130-a002-841b398d49c5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.782499 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv2c5\" (UniqueName: \"kubernetes.io/projected/c3b24340-938a-4130-a002-841b398d49c5-kube-api-access-lv2c5\") pod \"openstackclient\" (UID: \"c3b24340-938a-4130-a002-841b398d49c5\") " 
pod="openstack/openstackclient" Dec 04 15:57:21 crc kubenswrapper[4878]: I1204 15:57:21.898665 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 04 15:57:22 crc kubenswrapper[4878]: I1204 15:57:22.573827 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 04 15:57:22 crc kubenswrapper[4878]: W1204 15:57:22.577275 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3b24340_938a_4130_a002_841b398d49c5.slice/crio-afa25cae71dcb7d5ce35703740058c533fb2b499c4fd697246133644345742e9 WatchSource:0}: Error finding container afa25cae71dcb7d5ce35703740058c533fb2b499c4fd697246133644345742e9: Status 404 returned error can't find the container with id afa25cae71dcb7d5ce35703740058c533fb2b499c4fd697246133644345742e9 Dec 04 15:57:22 crc kubenswrapper[4878]: I1204 15:57:22.893334 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c3b24340-938a-4130-a002-841b398d49c5","Type":"ContainerStarted","Data":"afa25cae71dcb7d5ce35703740058c533fb2b499c4fd697246133644345742e9"} Dec 04 15:57:22 crc kubenswrapper[4878]: I1204 15:57:22.906327 4878 generic.go:334] "Generic (PLEG): container finished" podID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerID="474309a47ad7719ef89c64136273e43aef6422b0ddbd27f22b41904288ee9fa6" exitCode=0 Dec 04 15:57:22 crc kubenswrapper[4878]: I1204 15:57:22.906415 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aa160d23-1687-423c-8fdc-a082bfb7482b","Type":"ContainerDied","Data":"474309a47ad7719ef89c64136273e43aef6422b0ddbd27f22b41904288ee9fa6"} Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.045108 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75f5766fbb-8lpst" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api" 
probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:53284->10.217.0.157:9311: read: connection reset by peer" Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.045133 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-75f5766fbb-8lpst" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.157:9311/healthcheck\": read tcp 10.217.0.2:53300->10.217.0.157:9311: read: connection reset by peer" Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.267850 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.411200 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-combined-ca-bundle\") pod \"aa160d23-1687-423c-8fdc-a082bfb7482b\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.411252 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa160d23-1687-423c-8fdc-a082bfb7482b-etc-machine-id\") pod \"aa160d23-1687-423c-8fdc-a082bfb7482b\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.411323 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlz74\" (UniqueName: \"kubernetes.io/projected/aa160d23-1687-423c-8fdc-a082bfb7482b-kube-api-access-jlz74\") pod \"aa160d23-1687-423c-8fdc-a082bfb7482b\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.411441 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data-custom\") pod \"aa160d23-1687-423c-8fdc-a082bfb7482b\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.411472 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-scripts\") pod \"aa160d23-1687-423c-8fdc-a082bfb7482b\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.411592 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data\") pod \"aa160d23-1687-423c-8fdc-a082bfb7482b\" (UID: \"aa160d23-1687-423c-8fdc-a082bfb7482b\") " Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.415304 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aa160d23-1687-423c-8fdc-a082bfb7482b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "aa160d23-1687-423c-8fdc-a082bfb7482b" (UID: "aa160d23-1687-423c-8fdc-a082bfb7482b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.421862 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "aa160d23-1687-423c-8fdc-a082bfb7482b" (UID: "aa160d23-1687-423c-8fdc-a082bfb7482b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.424187 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa160d23-1687-423c-8fdc-a082bfb7482b-kube-api-access-jlz74" (OuterVolumeSpecName: "kube-api-access-jlz74") pod "aa160d23-1687-423c-8fdc-a082bfb7482b" (UID: "aa160d23-1687-423c-8fdc-a082bfb7482b"). InnerVolumeSpecName "kube-api-access-jlz74". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.450128 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-scripts" (OuterVolumeSpecName: "scripts") pod "aa160d23-1687-423c-8fdc-a082bfb7482b" (UID: "aa160d23-1687-423c-8fdc-a082bfb7482b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.685493 4878 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/aa160d23-1687-423c-8fdc-a082bfb7482b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.685537 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlz74\" (UniqueName: \"kubernetes.io/projected/aa160d23-1687-423c-8fdc-a082bfb7482b-kube-api-access-jlz74\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.685548 4878 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.685557 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-scripts\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.737947 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75f5766fbb-8lpst"
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.788448 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-combined-ca-bundle\") pod \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") "
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.788599 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data\") pod \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") "
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.788688 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-logs\") pod \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") "
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.788950 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrdfh\" (UniqueName: \"kubernetes.io/projected/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-kube-api-access-zrdfh\") pod \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") "
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.789008 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data-custom\") pod \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\" (UID: \"a8fb7afa-745f-44f1-816b-8c5b0c9b5073\") "
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.791066 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa160d23-1687-423c-8fdc-a082bfb7482b" (UID: "aa160d23-1687-423c-8fdc-a082bfb7482b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.794150 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-logs" (OuterVolumeSpecName: "logs") pod "a8fb7afa-745f-44f1-816b-8c5b0c9b5073" (UID: "a8fb7afa-745f-44f1-816b-8c5b0c9b5073"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.805646 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a8fb7afa-745f-44f1-816b-8c5b0c9b5073" (UID: "a8fb7afa-745f-44f1-816b-8c5b0c9b5073"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.805666 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-kube-api-access-zrdfh" (OuterVolumeSpecName: "kube-api-access-zrdfh") pod "a8fb7afa-745f-44f1-816b-8c5b0c9b5073" (UID: "a8fb7afa-745f-44f1-816b-8c5b0c9b5073"). InnerVolumeSpecName "kube-api-access-zrdfh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.846171 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data" (OuterVolumeSpecName: "config-data") pod "aa160d23-1687-423c-8fdc-a082bfb7482b" (UID: "aa160d23-1687-423c-8fdc-a082bfb7482b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.886831 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8fb7afa-745f-44f1-816b-8c5b0c9b5073" (UID: "a8fb7afa-745f-44f1-816b-8c5b0c9b5073"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.892931 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrdfh\" (UniqueName: \"kubernetes.io/projected/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-kube-api-access-zrdfh\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.892968 4878 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data-custom\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.892980 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.892989 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.893009 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa160d23-1687-423c-8fdc-a082bfb7482b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.893021 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-logs\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.914059 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data" (OuterVolumeSpecName: "config-data") pod "a8fb7afa-745f-44f1-816b-8c5b0c9b5073" (UID: "a8fb7afa-745f-44f1-816b-8c5b0c9b5073"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.952040 4878 generic.go:334] "Generic (PLEG): container finished" podID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerID="80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917" exitCode=0
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.952124 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f5766fbb-8lpst" event={"ID":"a8fb7afa-745f-44f1-816b-8c5b0c9b5073","Type":"ContainerDied","Data":"80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917"}
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.952159 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f5766fbb-8lpst" event={"ID":"a8fb7afa-745f-44f1-816b-8c5b0c9b5073","Type":"ContainerDied","Data":"deba2c366a5daa8b8f2fa2de0ab7010cd1f89899cab7322f380cc39bf1d32b85"}
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.952185 4878 scope.go:117] "RemoveContainer" containerID="80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917"
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.952354 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75f5766fbb-8lpst"
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.974682 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"aa160d23-1687-423c-8fdc-a082bfb7482b","Type":"ContainerDied","Data":"12d479aeab18ce168c1b07f3ff5cbde0f563a9ac1f1cfd90c10533b718365464"}
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.974774 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 04 15:57:23 crc kubenswrapper[4878]: I1204 15:57:23.995670 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8fb7afa-745f-44f1-816b-8c5b0c9b5073-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.036265 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75f5766fbb-8lpst"]
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.054082 4878 scope.go:117] "RemoveContainer" containerID="a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.056770 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-75f5766fbb-8lpst"]
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.083562 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.099423 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.114157 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 04 15:57:24 crc kubenswrapper[4878]: E1204 15:57:24.114708 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerName="probe"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.114726 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerName="probe"
Dec 04 15:57:24 crc kubenswrapper[4878]: E1204 15:57:24.114745 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.114753 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api"
Dec 04 15:57:24 crc kubenswrapper[4878]: E1204 15:57:24.114765 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerName="cinder-scheduler"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.114772 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerName="cinder-scheduler"
Dec 04 15:57:24 crc kubenswrapper[4878]: E1204 15:57:24.114791 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api-log"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.114801 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api-log"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.115026 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerName="probe"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.115044 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa160d23-1687-423c-8fdc-a082bfb7482b" containerName="cinder-scheduler"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.115062 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api-log"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.115079 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" containerName="barbican-api"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.116640 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.121118 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.128857 4878 scope.go:117] "RemoveContainer" containerID="80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917"
Dec 04 15:57:24 crc kubenswrapper[4878]: E1204 15:57:24.129665 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917\": container with ID starting with 80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917 not found: ID does not exist" containerID="80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.129982 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917"} err="failed to get container status \"80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917\": rpc error: code = NotFound desc = could not find container \"80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917\": container with ID starting with 80f0680dd6d9ee6f0b92705b6c00c8cdb97bbac57c6264ab7e253623c3599917 not found: ID does not exist"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.132073 4878 scope.go:117] "RemoveContainer" containerID="a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7"
Dec 04 15:57:24 crc kubenswrapper[4878]: E1204 15:57:24.132891 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7\": container with ID starting with a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7 not found: ID does not exist" containerID="a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.133062 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7"} err="failed to get container status \"a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7\": rpc error: code = NotFound desc = could not find container \"a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7\": container with ID starting with a4a56b22eb931bc42461a8e13f5df66bb871c5863f190d56832adc010d685de7 not found: ID does not exist"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.133188 4878 scope.go:117] "RemoveContainer" containerID="afbc07672892aef61704ec56c94bda4b747ea65cbaf29b1ec56332aaaafd42e4"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.145173 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.168594 4878 scope.go:117] "RemoveContainer" containerID="474309a47ad7719ef89c64136273e43aef6422b0ddbd27f22b41904288ee9fa6"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.211115 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm8kl\" (UniqueName: \"kubernetes.io/projected/f2640581-49bc-496a-8b18-01d492ff96dc-kube-api-access-bm8kl\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.211180 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2640581-49bc-496a-8b18-01d492ff96dc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.211242 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.211282 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.211314 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.211357 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.312974 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.313074 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.313159 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.313212 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm8kl\" (UniqueName: \"kubernetes.io/projected/f2640581-49bc-496a-8b18-01d492ff96dc-kube-api-access-bm8kl\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.313304 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2640581-49bc-496a-8b18-01d492ff96dc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.313400 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.315918 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2640581-49bc-496a-8b18-01d492ff96dc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.319816 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.319938 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.335464 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.339385 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2640581-49bc-496a-8b18-01d492ff96dc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.350079 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm8kl\" (UniqueName: \"kubernetes.io/projected/f2640581-49bc-496a-8b18-01d492ff96dc-kube-api-access-bm8kl\") pod \"cinder-scheduler-0\" (UID: \"f2640581-49bc-496a-8b18-01d492ff96dc\") " pod="openstack/cinder-scheduler-0"
Dec 04 15:57:24 crc kubenswrapper[4878]: I1204 15:57:24.438090 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 04 15:57:25 crc kubenswrapper[4878]: I1204 15:57:25.019075 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 04 15:57:25 crc kubenswrapper[4878]: I1204 15:57:25.209112 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8fb7afa-745f-44f1-816b-8c5b0c9b5073" path="/var/lib/kubelet/pods/a8fb7afa-745f-44f1-816b-8c5b0c9b5073/volumes"
Dec 04 15:57:25 crc kubenswrapper[4878]: I1204 15:57:25.211447 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa160d23-1687-423c-8fdc-a082bfb7482b" path="/var/lib/kubelet/pods/aa160d23-1687-423c-8fdc-a082bfb7482b/volumes"
Dec 04 15:57:25 crc kubenswrapper[4878]: I1204 15:57:25.923130 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 04 15:57:26 crc kubenswrapper[4878]: I1204 15:57:26.051301 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2640581-49bc-496a-8b18-01d492ff96dc","Type":"ContainerStarted","Data":"9bb92076325233621e03a87dfe2a64f10dcb9f2fb14bbe18c35e701da94cb8c5"}
Dec 04 15:57:26 crc kubenswrapper[4878]: I1204 15:57:26.051357 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2640581-49bc-496a-8b18-01d492ff96dc","Type":"ContainerStarted","Data":"48b8bb5809b4cf02e6994748efe97339bff68bfa4017bbd9adf5b1a909f39042"}
Dec 04 15:57:27 crc kubenswrapper[4878]: I1204 15:57:27.070715 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2640581-49bc-496a-8b18-01d492ff96dc","Type":"ContainerStarted","Data":"4820b84a76f9372d782af51c2d0f662414b036cf360c4c85f4b438a392b084b5"}
Dec 04 15:57:27 crc kubenswrapper[4878]: I1204 15:57:27.095912 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.09588993 podStartE2EDuration="3.09588993s" podCreationTimestamp="2025-12-04 15:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:27.089897669 +0000 UTC m=+1291.052434645" watchObservedRunningTime="2025-12-04 15:57:27.09588993 +0000 UTC m=+1291.058426876"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.068945 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-d68fcf6bc-v5rvx"]
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.077327 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.110473 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.111018 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.111185 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.120302 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d68fcf6bc-v5rvx"]
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.160106 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-config-data\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.160155 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-run-httpd\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.160214 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-etc-swift\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.160277 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-public-tls-certs\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.160336 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-combined-ca-bundle\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.160366 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-log-httpd\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.160384 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-internal-tls-certs\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.160406 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcv22\" (UniqueName: \"kubernetes.io/projected/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-kube-api-access-mcv22\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.262429 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-public-tls-certs\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.262528 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-combined-ca-bundle\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.262563 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-log-httpd\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.262587 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-internal-tls-certs\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.262608 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcv22\" (UniqueName: \"kubernetes.io/projected/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-kube-api-access-mcv22\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.262694 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-config-data\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.262728 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-run-httpd\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.262757 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-etc-swift\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.264332 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-log-httpd\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.266337 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-run-httpd\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.271398 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-config-data\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.271660 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-public-tls-certs\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.272274 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-internal-tls-certs\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.279147 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-combined-ca-bundle\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.297312 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-etc-swift\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.301333 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcv22\" (UniqueName: \"kubernetes.io/projected/07b8e6cd-af0c-4c2d-97fb-bee728d728a8-kube-api-access-mcv22\") pod \"swift-proxy-d68fcf6bc-v5rvx\" (UID: \"07b8e6cd-af0c-4c2d-97fb-bee728d728a8\") " pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:28 crc kubenswrapper[4878]: I1204 15:57:28.490470 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-d68fcf6bc-v5rvx"
Dec 04 15:57:29 crc kubenswrapper[4878]: I1204 15:57:29.251666 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d68fcf6bc-v5rvx"]
Dec 04 15:57:29 crc kubenswrapper[4878]: I1204 15:57:29.379753 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 04 15:57:29 crc kubenswrapper[4878]: I1204 15:57:29.380077 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="ceilometer-central-agent" containerID="cri-o://2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076" gracePeriod=30
Dec 04 15:57:29 crc kubenswrapper[4878]: I1204 15:57:29.381246 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="proxy-httpd" containerID="cri-o://77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d" gracePeriod=30
Dec 04 15:57:29 crc kubenswrapper[4878]: I1204 15:57:29.381313 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="sg-core" containerID="cri-o://974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0" gracePeriod=30
Dec 04 15:57:29 crc kubenswrapper[4878]: I1204 15:57:29.381352 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="ceilometer-notification-agent"
containerID="cri-o://382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2" gracePeriod=30 Dec 04 15:57:29 crc kubenswrapper[4878]: I1204 15:57:29.392501 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": EOF" Dec 04 15:57:29 crc kubenswrapper[4878]: I1204 15:57:29.438977 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.187187 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d68fcf6bc-v5rvx" event={"ID":"07b8e6cd-af0c-4c2d-97fb-bee728d728a8","Type":"ContainerStarted","Data":"40022f94608fe3fae2ccb45a22566b1c79655d448add55ca02557ccbe317db9b"} Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.187263 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d68fcf6bc-v5rvx" event={"ID":"07b8e6cd-af0c-4c2d-97fb-bee728d728a8","Type":"ContainerStarted","Data":"0bb4c907275d49473832715f141c51b72ea749c8297df450c46fd5fe51788588"} Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.187276 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d68fcf6bc-v5rvx" event={"ID":"07b8e6cd-af0c-4c2d-97fb-bee728d728a8","Type":"ContainerStarted","Data":"6e0f22cba6e0e30aa43845af24f13a1f126632f43b254542867405628ef97560"} Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.187314 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d68fcf6bc-v5rvx" Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.191666 4878 generic.go:334] "Generic (PLEG): container finished" podID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerID="77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d" exitCode=0 Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 
15:57:30.191713 4878 generic.go:334] "Generic (PLEG): container finished" podID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerID="974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0" exitCode=2 Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.191730 4878 generic.go:334] "Generic (PLEG): container finished" podID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerID="2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076" exitCode=0 Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.191730 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerDied","Data":"77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d"} Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.191782 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerDied","Data":"974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0"} Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.191798 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerDied","Data":"2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076"} Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.213627 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-d68fcf6bc-v5rvx" podStartSLOduration=3.213602956 podStartE2EDuration="3.213602956s" podCreationTimestamp="2025-12-04 15:57:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:30.212070147 +0000 UTC m=+1294.174607103" watchObservedRunningTime="2025-12-04 15:57:30.213602956 +0000 UTC m=+1294.176139912" Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.855205 4878 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.933836 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75xdz\" (UniqueName: \"kubernetes.io/projected/7c0e5133-6961-440e-902a-ee637e87c2c8-kube-api-access-75xdz\") pod \"7c0e5133-6961-440e-902a-ee637e87c2c8\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.933934 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-run-httpd\") pod \"7c0e5133-6961-440e-902a-ee637e87c2c8\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.933972 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-config-data\") pod \"7c0e5133-6961-440e-902a-ee637e87c2c8\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.934004 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-scripts\") pod \"7c0e5133-6961-440e-902a-ee637e87c2c8\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.934066 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-log-httpd\") pod \"7c0e5133-6961-440e-902a-ee637e87c2c8\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.934115 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-sg-core-conf-yaml\") pod \"7c0e5133-6961-440e-902a-ee637e87c2c8\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.934153 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-combined-ca-bundle\") pod \"7c0e5133-6961-440e-902a-ee637e87c2c8\" (UID: \"7c0e5133-6961-440e-902a-ee637e87c2c8\") " Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.935935 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7c0e5133-6961-440e-902a-ee637e87c2c8" (UID: "7c0e5133-6961-440e-902a-ee637e87c2c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.936290 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7c0e5133-6961-440e-902a-ee637e87c2c8" (UID: "7c0e5133-6961-440e-902a-ee637e87c2c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.944565 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-scripts" (OuterVolumeSpecName: "scripts") pod "7c0e5133-6961-440e-902a-ee637e87c2c8" (UID: "7c0e5133-6961-440e-902a-ee637e87c2c8"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:30 crc kubenswrapper[4878]: I1204 15:57:30.948193 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0e5133-6961-440e-902a-ee637e87c2c8-kube-api-access-75xdz" (OuterVolumeSpecName: "kube-api-access-75xdz") pod "7c0e5133-6961-440e-902a-ee637e87c2c8" (UID: "7c0e5133-6961-440e-902a-ee637e87c2c8"). InnerVolumeSpecName "kube-api-access-75xdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.003363 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7c0e5133-6961-440e-902a-ee637e87c2c8" (UID: "7c0e5133-6961-440e-902a-ee637e87c2c8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.006703 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-db576cdd4-fp9zg" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.026187 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c0e5133-6961-440e-902a-ee637e87c2c8" (UID: "7c0e5133-6961-440e-902a-ee637e87c2c8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.037525 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75xdz\" (UniqueName: \"kubernetes.io/projected/7c0e5133-6961-440e-902a-ee637e87c2c8-kube-api-access-75xdz\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.037581 4878 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.037596 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.037611 4878 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c0e5133-6961-440e-902a-ee637e87c2c8-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.037625 4878 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.037637 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.059004 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-config-data" (OuterVolumeSpecName: "config-data") pod "7c0e5133-6961-440e-902a-ee637e87c2c8" (UID: "7c0e5133-6961-440e-902a-ee637e87c2c8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.139602 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c0e5133-6961-440e-902a-ee637e87c2c8-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.222801 4878 generic.go:334] "Generic (PLEG): container finished" podID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerID="382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2" exitCode=0 Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.224391 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.228114 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerDied","Data":"382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2"} Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.228286 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c0e5133-6961-440e-902a-ee637e87c2c8","Type":"ContainerDied","Data":"fd975625ad8c55eac7143890c492dab09d7e94230a90f51c5a4ca4ff080e5e34"} Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.228387 4878 scope.go:117] "RemoveContainer" containerID="77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.228850 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d68fcf6bc-v5rvx" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.272569 4878 scope.go:117] "RemoveContainer" containerID="974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.273032 4878 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/placement-c5668dccb-gv79r" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.310224 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.521543 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.559569 4878 scope.go:117] "RemoveContainer" containerID="382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.564287 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:31 crc kubenswrapper[4878]: E1204 15:57:31.564796 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="ceilometer-central-agent" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.564819 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="ceilometer-central-agent" Dec 04 15:57:31 crc kubenswrapper[4878]: E1204 15:57:31.564857 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="ceilometer-notification-agent" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.564898 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="ceilometer-notification-agent" Dec 04 15:57:31 crc kubenswrapper[4878]: E1204 15:57:31.564915 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="proxy-httpd" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.564921 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="proxy-httpd" Dec 04 15:57:31 crc kubenswrapper[4878]: E1204 15:57:31.564954 
4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="sg-core" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.564959 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="sg-core" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.565160 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="proxy-httpd" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.565179 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="ceilometer-central-agent" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.565190 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="sg-core" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.565201 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" containerName="ceilometer-notification-agent" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.574722 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.579894 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.579910 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.604120 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.617040 4878 scope.go:117] "RemoveContainer" containerID="2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.671145 4878 scope.go:117] "RemoveContainer" containerID="77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d" Dec 04 15:57:31 crc kubenswrapper[4878]: E1204 15:57:31.675050 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d\": container with ID starting with 77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d not found: ID does not exist" containerID="77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.675318 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d"} err="failed to get container status \"77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d\": rpc error: code = NotFound desc = could not find container \"77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d\": container with ID starting with 77a144418c7392e1b58820cc40af0c1c27de49aa0fb126e2ed1c2a14da92d15d not found: ID does not exist" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 
15:57:31.675443 4878 scope.go:117] "RemoveContainer" containerID="974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0" Dec 04 15:57:31 crc kubenswrapper[4878]: E1204 15:57:31.676738 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0\": container with ID starting with 974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0 not found: ID does not exist" containerID="974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.676818 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0"} err="failed to get container status \"974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0\": rpc error: code = NotFound desc = could not find container \"974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0\": container with ID starting with 974527a913ba94d16c1e57fecbdfc8bd9ad379597974d8dc227605fc03ae6db0 not found: ID does not exist" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.676854 4878 scope.go:117] "RemoveContainer" containerID="382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2" Dec 04 15:57:31 crc kubenswrapper[4878]: E1204 15:57:31.681000 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2\": container with ID starting with 382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2 not found: ID does not exist" containerID="382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.681045 4878 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2"} err="failed to get container status \"382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2\": rpc error: code = NotFound desc = could not find container \"382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2\": container with ID starting with 382a88baab8b687d7f37eda390849f55c3345d10f654c391735c12c78bddf4f2 not found: ID does not exist" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.681066 4878 scope.go:117] "RemoveContainer" containerID="2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076" Dec 04 15:57:31 crc kubenswrapper[4878]: E1204 15:57:31.682993 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076\": container with ID starting with 2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076 not found: ID does not exist" containerID="2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.683056 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076"} err="failed to get container status \"2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076\": rpc error: code = NotFound desc = could not find container \"2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076\": container with ID starting with 2f81c2c8086ffab615f940cc203724a45a28aea8ee0686379e77032d24ce9076 not found: ID does not exist" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.693522 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl5vj\" (UniqueName: \"kubernetes.io/projected/baf9cebe-ff47-435b-b578-59d30678089a-kube-api-access-wl5vj\") pod 
\"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.693986 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-config-data\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.694182 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.694306 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-log-httpd\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.694428 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-run-httpd\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.694578 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-scripts\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.694775 
4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.796542 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.796660 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl5vj\" (UniqueName: \"kubernetes.io/projected/baf9cebe-ff47-435b-b578-59d30678089a-kube-api-access-wl5vj\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.796704 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-config-data\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.796740 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.796762 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-log-httpd\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.796782 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-run-httpd\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.796815 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-scripts\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.797648 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-log-httpd\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.798984 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-run-httpd\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.806387 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-config-data\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.807427 4878 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.811769 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.812768 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-scripts\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.826693 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl5vj\" (UniqueName: \"kubernetes.io/projected/baf9cebe-ff47-435b-b578-59d30678089a-kube-api-access-wl5vj\") pod \"ceilometer-0\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " pod="openstack/ceilometer-0" Dec 04 15:57:31 crc kubenswrapper[4878]: I1204 15:57:31.911037 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.291223 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-hxdhg"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.294639 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.312512 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hxdhg"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.377743 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-47pmx"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.391130 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ac85-account-create-update-8pm6w"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.392087 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.393717 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.407209 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-47pmx"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.407902 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.413441 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ac85-account-create-update-8pm6w"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.414863 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68x5t\" (UniqueName: \"kubernetes.io/projected/7c679082-d66c-4280-bfab-15d1b6634db9-kube-api-access-68x5t\") pod \"nova-api-db-create-hxdhg\" (UID: \"7c679082-d66c-4280-bfab-15d1b6634db9\") " pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.414992 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c679082-d66c-4280-bfab-15d1b6634db9-operator-scripts\") pod \"nova-api-db-create-hxdhg\" (UID: \"7c679082-d66c-4280-bfab-15d1b6634db9\") " pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.514463 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-09fc-account-create-update-sf4xx"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.516128 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.517547 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c679082-d66c-4280-bfab-15d1b6634db9-operator-scripts\") pod \"nova-api-db-create-hxdhg\" (UID: \"7c679082-d66c-4280-bfab-15d1b6634db9\") " pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.517756 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba562765-65c2-4259-9373-38288bb120e3-operator-scripts\") pod \"nova-cell0-db-create-47pmx\" (UID: \"ba562765-65c2-4259-9373-38288bb120e3\") " pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.517923 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68x5t\" (UniqueName: \"kubernetes.io/projected/7c679082-d66c-4280-bfab-15d1b6634db9-kube-api-access-68x5t\") pod \"nova-api-db-create-hxdhg\" (UID: \"7c679082-d66c-4280-bfab-15d1b6634db9\") " pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.518030 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-26m8l\" (UniqueName: \"kubernetes.io/projected/ba562765-65c2-4259-9373-38288bb120e3-kube-api-access-26m8l\") pod \"nova-cell0-db-create-47pmx\" (UID: \"ba562765-65c2-4259-9373-38288bb120e3\") " pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.518131 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckhmm\" (UniqueName: \"kubernetes.io/projected/e426e3f3-3d26-4b49-873b-3a442b7de183-kube-api-access-ckhmm\") pod \"nova-api-ac85-account-create-update-8pm6w\" (UID: \"e426e3f3-3d26-4b49-873b-3a442b7de183\") " pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.518216 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e426e3f3-3d26-4b49-873b-3a442b7de183-operator-scripts\") pod \"nova-api-ac85-account-create-update-8pm6w\" (UID: \"e426e3f3-3d26-4b49-873b-3a442b7de183\") " pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.519155 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c679082-d66c-4280-bfab-15d1b6634db9-operator-scripts\") pod \"nova-api-db-create-hxdhg\" (UID: \"7c679082-d66c-4280-bfab-15d1b6634db9\") " pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.519541 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.537973 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-nb89k"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.543684 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.553924 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-09fc-account-create-update-sf4xx"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.569539 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nb89k"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.572557 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68x5t\" (UniqueName: \"kubernetes.io/projected/7c679082-d66c-4280-bfab-15d1b6634db9-kube-api-access-68x5t\") pod \"nova-api-db-create-hxdhg\" (UID: \"7c679082-d66c-4280-bfab-15d1b6634db9\") " pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.623848 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba562765-65c2-4259-9373-38288bb120e3-operator-scripts\") pod \"nova-cell0-db-create-47pmx\" (UID: \"ba562765-65c2-4259-9373-38288bb120e3\") " pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.623999 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qtfh\" (UniqueName: \"kubernetes.io/projected/f0ab7e57-08d0-4697-bbf7-3abe045473b0-kube-api-access-4qtfh\") pod \"nova-cell0-09fc-account-create-update-sf4xx\" (UID: \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\") " pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.624063 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bba4913b-3e30-4b9c-a404-50217b5f1657-operator-scripts\") pod \"nova-cell1-db-create-nb89k\" (UID: \"bba4913b-3e30-4b9c-a404-50217b5f1657\") 
" pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.624096 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0ab7e57-08d0-4697-bbf7-3abe045473b0-operator-scripts\") pod \"nova-cell0-09fc-account-create-update-sf4xx\" (UID: \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\") " pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.624168 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26m8l\" (UniqueName: \"kubernetes.io/projected/ba562765-65c2-4259-9373-38288bb120e3-kube-api-access-26m8l\") pod \"nova-cell0-db-create-47pmx\" (UID: \"ba562765-65c2-4259-9373-38288bb120e3\") " pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.624369 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckhmm\" (UniqueName: \"kubernetes.io/projected/e426e3f3-3d26-4b49-873b-3a442b7de183-kube-api-access-ckhmm\") pod \"nova-api-ac85-account-create-update-8pm6w\" (UID: \"e426e3f3-3d26-4b49-873b-3a442b7de183\") " pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.624412 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e426e3f3-3d26-4b49-873b-3a442b7de183-operator-scripts\") pod \"nova-api-ac85-account-create-update-8pm6w\" (UID: \"e426e3f3-3d26-4b49-873b-3a442b7de183\") " pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.624576 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrkr\" (UniqueName: 
\"kubernetes.io/projected/bba4913b-3e30-4b9c-a404-50217b5f1657-kube-api-access-xzrkr\") pod \"nova-cell1-db-create-nb89k\" (UID: \"bba4913b-3e30-4b9c-a404-50217b5f1657\") " pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.624949 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.626298 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e426e3f3-3d26-4b49-873b-3a442b7de183-operator-scripts\") pod \"nova-api-ac85-account-create-update-8pm6w\" (UID: \"e426e3f3-3d26-4b49-873b-3a442b7de183\") " pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.628434 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba562765-65c2-4259-9373-38288bb120e3-operator-scripts\") pod \"nova-cell0-db-create-47pmx\" (UID: \"ba562765-65c2-4259-9373-38288bb120e3\") " pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.646825 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckhmm\" (UniqueName: \"kubernetes.io/projected/e426e3f3-3d26-4b49-873b-3a442b7de183-kube-api-access-ckhmm\") pod \"nova-api-ac85-account-create-update-8pm6w\" (UID: \"e426e3f3-3d26-4b49-873b-3a442b7de183\") " pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.648157 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26m8l\" (UniqueName: \"kubernetes.io/projected/ba562765-65c2-4259-9373-38288bb120e3-kube-api-access-26m8l\") pod \"nova-cell0-db-create-47pmx\" (UID: \"ba562765-65c2-4259-9373-38288bb120e3\") " pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:32 crc 
kubenswrapper[4878]: I1204 15:57:32.650483 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.729210 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrkr\" (UniqueName: \"kubernetes.io/projected/bba4913b-3e30-4b9c-a404-50217b5f1657-kube-api-access-xzrkr\") pod \"nova-cell1-db-create-nb89k\" (UID: \"bba4913b-3e30-4b9c-a404-50217b5f1657\") " pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.729309 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qtfh\" (UniqueName: \"kubernetes.io/projected/f0ab7e57-08d0-4697-bbf7-3abe045473b0-kube-api-access-4qtfh\") pod \"nova-cell0-09fc-account-create-update-sf4xx\" (UID: \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\") " pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.729343 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bba4913b-3e30-4b9c-a404-50217b5f1657-operator-scripts\") pod \"nova-cell1-db-create-nb89k\" (UID: \"bba4913b-3e30-4b9c-a404-50217b5f1657\") " pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.729364 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0ab7e57-08d0-4697-bbf7-3abe045473b0-operator-scripts\") pod \"nova-cell0-09fc-account-create-update-sf4xx\" (UID: \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\") " pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.730370 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f0ab7e57-08d0-4697-bbf7-3abe045473b0-operator-scripts\") pod \"nova-cell0-09fc-account-create-update-sf4xx\" (UID: \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\") " pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.731368 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bba4913b-3e30-4b9c-a404-50217b5f1657-operator-scripts\") pod \"nova-cell1-db-create-nb89k\" (UID: \"bba4913b-3e30-4b9c-a404-50217b5f1657\") " pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.752673 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-bea9-account-create-update-8rhwx"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.754234 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.754476 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qtfh\" (UniqueName: \"kubernetes.io/projected/f0ab7e57-08d0-4697-bbf7-3abe045473b0-kube-api-access-4qtfh\") pod \"nova-cell0-09fc-account-create-update-sf4xx\" (UID: \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\") " pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.759401 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.760660 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrkr\" (UniqueName: \"kubernetes.io/projected/bba4913b-3e30-4b9c-a404-50217b5f1657-kube-api-access-xzrkr\") pod \"nova-cell1-db-create-nb89k\" (UID: \"bba4913b-3e30-4b9c-a404-50217b5f1657\") " pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:32 
crc kubenswrapper[4878]: I1204 15:57:32.763280 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bea9-account-create-update-8rhwx"] Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.765749 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.798505 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.830852 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78be2f68-3d21-4345-8544-3809d5dab436-operator-scripts\") pod \"nova-cell1-bea9-account-create-update-8rhwx\" (UID: \"78be2f68-3d21-4345-8544-3809d5dab436\") " pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.831107 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jdx\" (UniqueName: \"kubernetes.io/projected/78be2f68-3d21-4345-8544-3809d5dab436-kube-api-access-p2jdx\") pod \"nova-cell1-bea9-account-create-update-8rhwx\" (UID: \"78be2f68-3d21-4345-8544-3809d5dab436\") " pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.842286 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.876075 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.933740 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2jdx\" (UniqueName: \"kubernetes.io/projected/78be2f68-3d21-4345-8544-3809d5dab436-kube-api-access-p2jdx\") pod \"nova-cell1-bea9-account-create-update-8rhwx\" (UID: \"78be2f68-3d21-4345-8544-3809d5dab436\") " pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.933964 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78be2f68-3d21-4345-8544-3809d5dab436-operator-scripts\") pod \"nova-cell1-bea9-account-create-update-8rhwx\" (UID: \"78be2f68-3d21-4345-8544-3809d5dab436\") " pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.934941 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78be2f68-3d21-4345-8544-3809d5dab436-operator-scripts\") pod \"nova-cell1-bea9-account-create-update-8rhwx\" (UID: \"78be2f68-3d21-4345-8544-3809d5dab436\") " pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:32 crc kubenswrapper[4878]: I1204 15:57:32.952186 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2jdx\" (UniqueName: \"kubernetes.io/projected/78be2f68-3d21-4345-8544-3809d5dab436-kube-api-access-p2jdx\") pod \"nova-cell1-bea9-account-create-update-8rhwx\" (UID: \"78be2f68-3d21-4345-8544-3809d5dab436\") " pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:33 crc kubenswrapper[4878]: I1204 15:57:33.173786 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:33 crc kubenswrapper[4878]: I1204 15:57:33.204179 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0e5133-6961-440e-902a-ee637e87c2c8" path="/var/lib/kubelet/pods/7c0e5133-6961-440e-902a-ee637e87c2c8/volumes" Dec 04 15:57:34 crc kubenswrapper[4878]: I1204 15:57:34.747864 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 04 15:57:35 crc kubenswrapper[4878]: I1204 15:57:35.053210 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:38 crc kubenswrapper[4878]: I1204 15:57:38.498490 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d68fcf6bc-v5rvx" Dec 04 15:57:38 crc kubenswrapper[4878]: I1204 15:57:38.499421 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d68fcf6bc-v5rvx" Dec 04 15:57:38 crc kubenswrapper[4878]: W1204 15:57:38.758753 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbaf9cebe_ff47_435b_b578_59d30678089a.slice/crio-7dd47457ee89a4a8edd577460cddd8f7fac3d1357f6eab7a850149995412cabe WatchSource:0}: Error finding container 7dd47457ee89a4a8edd577460cddd8f7fac3d1357f6eab7a850149995412cabe: Status 404 returned error can't find the container with id 7dd47457ee89a4a8edd577460cddd8f7fac3d1357f6eab7a850149995412cabe Dec 04 15:57:39 crc kubenswrapper[4878]: I1204 15:57:39.372209 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerStarted","Data":"7dd47457ee89a4a8edd577460cddd8f7fac3d1357f6eab7a850149995412cabe"} Dec 04 15:57:39 crc kubenswrapper[4878]: I1204 15:57:39.492859 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-09fc-account-create-update-sf4xx"] Dec 04 15:57:39 crc kubenswrapper[4878]: W1204 15:57:39.563599 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbba4913b_3e30_4b9c_a404_50217b5f1657.slice/crio-c6bbff27b8579eadc4dbca6479d1e4526edfb6f49122d364720aaade12dd6b61 WatchSource:0}: Error finding container c6bbff27b8579eadc4dbca6479d1e4526edfb6f49122d364720aaade12dd6b61: Status 404 returned error can't find the container with id c6bbff27b8579eadc4dbca6479d1e4526edfb6f49122d364720aaade12dd6b61 Dec 04 15:57:39 crc kubenswrapper[4878]: I1204 15:57:39.566467 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nb89k"] Dec 04 15:57:39 crc kubenswrapper[4878]: I1204 15:57:39.761142 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-bea9-account-create-update-8rhwx"] Dec 04 15:57:39 crc kubenswrapper[4878]: W1204 15:57:39.768228 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78be2f68_3d21_4345_8544_3809d5dab436.slice/crio-474f622a189f7b071d4579bdc42c0a797511a396a27377c98efd505c612ad44d WatchSource:0}: Error finding container 474f622a189f7b071d4579bdc42c0a797511a396a27377c98efd505c612ad44d: Status 404 returned error can't find the container with id 474f622a189f7b071d4579bdc42c0a797511a396a27377c98efd505c612ad44d Dec 04 15:57:39 crc kubenswrapper[4878]: W1204 15:57:39.780693 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba562765_65c2_4259_9373_38288bb120e3.slice/crio-2226b616ede75cd0b28b6161b1b732c0cc9e22783d334843577d7dd48679e375 WatchSource:0}: Error finding container 2226b616ede75cd0b28b6161b1b732c0cc9e22783d334843577d7dd48679e375: Status 404 returned error can't find the container with id 
2226b616ede75cd0b28b6161b1b732c0cc9e22783d334843577d7dd48679e375 Dec 04 15:57:39 crc kubenswrapper[4878]: I1204 15:57:39.785538 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-47pmx"] Dec 04 15:57:39 crc kubenswrapper[4878]: W1204 15:57:39.790193 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode426e3f3_3d26_4b49_873b_3a442b7de183.slice/crio-c7eb2282557a5f1c8b3153f61bdd532d29f5bb5a4d09638aecdd163d344276a8 WatchSource:0}: Error finding container c7eb2282557a5f1c8b3153f61bdd532d29f5bb5a4d09638aecdd163d344276a8: Status 404 returned error can't find the container with id c7eb2282557a5f1c8b3153f61bdd532d29f5bb5a4d09638aecdd163d344276a8 Dec 04 15:57:39 crc kubenswrapper[4878]: I1204 15:57:39.796224 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ac85-account-create-update-8pm6w"] Dec 04 15:57:39 crc kubenswrapper[4878]: I1204 15:57:39.924975 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-hxdhg"] Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.397454 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac85-account-create-update-8pm6w" event={"ID":"e426e3f3-3d26-4b49-873b-3a442b7de183","Type":"ContainerStarted","Data":"b8dfa64f57f80f7a21636d9ec8ef242095f550143bfd9e914b0b7e404d210130"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.398349 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac85-account-create-update-8pm6w" event={"ID":"e426e3f3-3d26-4b49-873b-3a442b7de183","Type":"ContainerStarted","Data":"c7eb2282557a5f1c8b3153f61bdd532d29f5bb5a4d09638aecdd163d344276a8"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.406459 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" 
event={"ID":"78be2f68-3d21-4345-8544-3809d5dab436","Type":"ContainerStarted","Data":"54c2fc6cdefa5750264a0c0dcf964bbbee7b206d89c0af2020725d8e19659586"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.406536 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" event={"ID":"78be2f68-3d21-4345-8544-3809d5dab436","Type":"ContainerStarted","Data":"474f622a189f7b071d4579bdc42c0a797511a396a27377c98efd505c612ad44d"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.423750 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"c3b24340-938a-4130-a002-841b398d49c5","Type":"ContainerStarted","Data":"c02c8f77c5afe5d09f8a1802ff78652720c065dca0b043eb2c5dd430fb166891"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.431078 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-ac85-account-create-update-8pm6w" podStartSLOduration=8.431049814 podStartE2EDuration="8.431049814s" podCreationTimestamp="2025-12-04 15:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:40.419106114 +0000 UTC m=+1304.381643070" watchObservedRunningTime="2025-12-04 15:57:40.431049814 +0000 UTC m=+1304.393586770" Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.439156 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-47pmx" event={"ID":"ba562765-65c2-4259-9373-38288bb120e3","Type":"ContainerStarted","Data":"f10916f4fb1f8cfe21025398b248f1f35619a99235e5ade4c17d1381afe7390e"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.439205 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-47pmx" event={"ID":"ba562765-65c2-4259-9373-38288bb120e3","Type":"ContainerStarted","Data":"2226b616ede75cd0b28b6161b1b732c0cc9e22783d334843577d7dd48679e375"} Dec 04 
15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.448647 4878 generic.go:334] "Generic (PLEG): container finished" podID="f0ab7e57-08d0-4697-bbf7-3abe045473b0" containerID="ab6d7187cbe96a4328879803fca06603addbe69aad047dfee5f1f63887dbc884" exitCode=0 Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.448767 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" event={"ID":"f0ab7e57-08d0-4697-bbf7-3abe045473b0","Type":"ContainerDied","Data":"ab6d7187cbe96a4328879803fca06603addbe69aad047dfee5f1f63887dbc884"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.448803 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" event={"ID":"f0ab7e57-08d0-4697-bbf7-3abe045473b0","Type":"ContainerStarted","Data":"44f99cc2285ccc6a272f4dffe78f4717414c20080e2608eaf8012ca620fa1f7a"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.462201 4878 generic.go:334] "Generic (PLEG): container finished" podID="bba4913b-3e30-4b9c-a404-50217b5f1657" containerID="59993031e15363a5e8ff86880ba855bbb07481f07b4691b8444ace0c8e930fcf" exitCode=0 Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.462304 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nb89k" event={"ID":"bba4913b-3e30-4b9c-a404-50217b5f1657","Type":"ContainerDied","Data":"59993031e15363a5e8ff86880ba855bbb07481f07b4691b8444ace0c8e930fcf"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.462337 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nb89k" event={"ID":"bba4913b-3e30-4b9c-a404-50217b5f1657","Type":"ContainerStarted","Data":"c6bbff27b8579eadc4dbca6479d1e4526edfb6f49122d364720aaade12dd6b61"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.466213 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" 
podStartSLOduration=8.466187227 podStartE2EDuration="8.466187227s" podCreationTimestamp="2025-12-04 15:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:40.439568378 +0000 UTC m=+1304.402105324" watchObservedRunningTime="2025-12-04 15:57:40.466187227 +0000 UTC m=+1304.428724183" Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.478692 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdhg" event={"ID":"7c679082-d66c-4280-bfab-15d1b6634db9","Type":"ContainerStarted","Data":"84d02a87cb621416dda946cd5ec4fe76854b9e44ee5237dc3d51f348945c9f7f"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.478761 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdhg" event={"ID":"7c679082-d66c-4280-bfab-15d1b6634db9","Type":"ContainerStarted","Data":"5ba5e90824f4ba1df0e0c0d1991982dbb7c02a01ae52d03ec7f1d5e51812c6de"} Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.488586 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-47pmx" podStartSLOduration=8.488554659 podStartE2EDuration="8.488554659s" podCreationTimestamp="2025-12-04 15:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:40.458080373 +0000 UTC m=+1304.420617349" watchObservedRunningTime="2025-12-04 15:57:40.488554659 +0000 UTC m=+1304.451091615" Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.502302 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.148347997 podStartE2EDuration="19.502276294s" podCreationTimestamp="2025-12-04 15:57:21 +0000 UTC" firstStartedPulling="2025-12-04 15:57:22.579790596 +0000 UTC m=+1286.542327552" lastFinishedPulling="2025-12-04 15:57:38.933718893 
+0000 UTC m=+1302.896255849" observedRunningTime="2025-12-04 15:57:40.478069646 +0000 UTC m=+1304.440606612" watchObservedRunningTime="2025-12-04 15:57:40.502276294 +0000 UTC m=+1304.464813250" Dec 04 15:57:40 crc kubenswrapper[4878]: I1204 15:57:40.546516 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-hxdhg" podStartSLOduration=8.546488504 podStartE2EDuration="8.546488504s" podCreationTimestamp="2025-12-04 15:57:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:57:40.54432654 +0000 UTC m=+1304.506863496" watchObservedRunningTime="2025-12-04 15:57:40.546488504 +0000 UTC m=+1304.509025480" Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.007786 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-db576cdd4-fp9zg" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.010294 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.490990 4878 generic.go:334] "Generic (PLEG): container finished" podID="78be2f68-3d21-4345-8544-3809d5dab436" containerID="54c2fc6cdefa5750264a0c0dcf964bbbee7b206d89c0af2020725d8e19659586" exitCode=0 Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.491134 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" event={"ID":"78be2f68-3d21-4345-8544-3809d5dab436","Type":"ContainerDied","Data":"54c2fc6cdefa5750264a0c0dcf964bbbee7b206d89c0af2020725d8e19659586"} Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.494714 4878 generic.go:334] "Generic 
(PLEG): container finished" podID="ba562765-65c2-4259-9373-38288bb120e3" containerID="f10916f4fb1f8cfe21025398b248f1f35619a99235e5ade4c17d1381afe7390e" exitCode=0 Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.494828 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-47pmx" event={"ID":"ba562765-65c2-4259-9373-38288bb120e3","Type":"ContainerDied","Data":"f10916f4fb1f8cfe21025398b248f1f35619a99235e5ade4c17d1381afe7390e"} Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.498078 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerStarted","Data":"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596"} Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.498670 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerStarted","Data":"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240"} Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.502568 4878 generic.go:334] "Generic (PLEG): container finished" podID="7c679082-d66c-4280-bfab-15d1b6634db9" containerID="84d02a87cb621416dda946cd5ec4fe76854b9e44ee5237dc3d51f348945c9f7f" exitCode=0 Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.502810 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdhg" event={"ID":"7c679082-d66c-4280-bfab-15d1b6634db9","Type":"ContainerDied","Data":"84d02a87cb621416dda946cd5ec4fe76854b9e44ee5237dc3d51f348945c9f7f"} Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.519295 4878 generic.go:334] "Generic (PLEG): container finished" podID="e426e3f3-3d26-4b49-873b-3a442b7de183" containerID="b8dfa64f57f80f7a21636d9ec8ef242095f550143bfd9e914b0b7e404d210130" exitCode=0 Dec 04 15:57:41 crc kubenswrapper[4878]: I1204 15:57:41.519507 4878 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-api-ac85-account-create-update-8pm6w" event={"ID":"e426e3f3-3d26-4b49-873b-3a442b7de183","Type":"ContainerDied","Data":"b8dfa64f57f80f7a21636d9ec8ef242095f550143bfd9e914b0b7e404d210130"} Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.109649 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.194420 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qtfh\" (UniqueName: \"kubernetes.io/projected/f0ab7e57-08d0-4697-bbf7-3abe045473b0-kube-api-access-4qtfh\") pod \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\" (UID: \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\") " Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.194539 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0ab7e57-08d0-4697-bbf7-3abe045473b0-operator-scripts\") pod \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\" (UID: \"f0ab7e57-08d0-4697-bbf7-3abe045473b0\") " Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.195227 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0ab7e57-08d0-4697-bbf7-3abe045473b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0ab7e57-08d0-4697-bbf7-3abe045473b0" (UID: "f0ab7e57-08d0-4697-bbf7-3abe045473b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.203150 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0ab7e57-08d0-4697-bbf7-3abe045473b0-kube-api-access-4qtfh" (OuterVolumeSpecName: "kube-api-access-4qtfh") pod "f0ab7e57-08d0-4697-bbf7-3abe045473b0" (UID: "f0ab7e57-08d0-4697-bbf7-3abe045473b0"). 
InnerVolumeSpecName "kube-api-access-4qtfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.224946 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.296081 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzrkr\" (UniqueName: \"kubernetes.io/projected/bba4913b-3e30-4b9c-a404-50217b5f1657-kube-api-access-xzrkr\") pod \"bba4913b-3e30-4b9c-a404-50217b5f1657\" (UID: \"bba4913b-3e30-4b9c-a404-50217b5f1657\") " Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.296158 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bba4913b-3e30-4b9c-a404-50217b5f1657-operator-scripts\") pod \"bba4913b-3e30-4b9c-a404-50217b5f1657\" (UID: \"bba4913b-3e30-4b9c-a404-50217b5f1657\") " Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.297170 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bba4913b-3e30-4b9c-a404-50217b5f1657-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bba4913b-3e30-4b9c-a404-50217b5f1657" (UID: "bba4913b-3e30-4b9c-a404-50217b5f1657"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.298700 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qtfh\" (UniqueName: \"kubernetes.io/projected/f0ab7e57-08d0-4697-bbf7-3abe045473b0-kube-api-access-4qtfh\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.298787 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0ab7e57-08d0-4697-bbf7-3abe045473b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.298801 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bba4913b-3e30-4b9c-a404-50217b5f1657-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.301892 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba4913b-3e30-4b9c-a404-50217b5f1657-kube-api-access-xzrkr" (OuterVolumeSpecName: "kube-api-access-xzrkr") pod "bba4913b-3e30-4b9c-a404-50217b5f1657" (UID: "bba4913b-3e30-4b9c-a404-50217b5f1657"). InnerVolumeSpecName "kube-api-access-xzrkr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.401149 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzrkr\" (UniqueName: \"kubernetes.io/projected/bba4913b-3e30-4b9c-a404-50217b5f1657-kube-api-access-xzrkr\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.531309 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" event={"ID":"f0ab7e57-08d0-4697-bbf7-3abe045473b0","Type":"ContainerDied","Data":"44f99cc2285ccc6a272f4dffe78f4717414c20080e2608eaf8012ca620fa1f7a"} Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.531404 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f99cc2285ccc6a272f4dffe78f4717414c20080e2608eaf8012ca620fa1f7a" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.531338 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-09fc-account-create-update-sf4xx" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.533425 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nb89k" event={"ID":"bba4913b-3e30-4b9c-a404-50217b5f1657","Type":"ContainerDied","Data":"c6bbff27b8579eadc4dbca6479d1e4526edfb6f49122d364720aaade12dd6b61"} Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.533493 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6bbff27b8579eadc4dbca6479d1e4526edfb6f49122d364720aaade12dd6b61" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.533559 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-nb89k" Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.539092 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerStarted","Data":"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d"} Dec 04 15:57:42 crc kubenswrapper[4878]: I1204 15:57:42.956430 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.018820 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c679082-d66c-4280-bfab-15d1b6634db9-operator-scripts\") pod \"7c679082-d66c-4280-bfab-15d1b6634db9\" (UID: \"7c679082-d66c-4280-bfab-15d1b6634db9\") " Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.019147 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68x5t\" (UniqueName: \"kubernetes.io/projected/7c679082-d66c-4280-bfab-15d1b6634db9-kube-api-access-68x5t\") pod \"7c679082-d66c-4280-bfab-15d1b6634db9\" (UID: \"7c679082-d66c-4280-bfab-15d1b6634db9\") " Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.021650 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c679082-d66c-4280-bfab-15d1b6634db9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c679082-d66c-4280-bfab-15d1b6634db9" (UID: "7c679082-d66c-4280-bfab-15d1b6634db9"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.029323 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c679082-d66c-4280-bfab-15d1b6634db9-kube-api-access-68x5t" (OuterVolumeSpecName: "kube-api-access-68x5t") pod "7c679082-d66c-4280-bfab-15d1b6634db9" (UID: "7c679082-d66c-4280-bfab-15d1b6634db9"). InnerVolumeSpecName "kube-api-access-68x5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.122351 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68x5t\" (UniqueName: \"kubernetes.io/projected/7c679082-d66c-4280-bfab-15d1b6634db9-kube-api-access-68x5t\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.122399 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c679082-d66c-4280-bfab-15d1b6634db9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.169774 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.175807 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.215928 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.236968 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e426e3f3-3d26-4b49-873b-3a442b7de183-operator-scripts\") pod \"e426e3f3-3d26-4b49-873b-3a442b7de183\" (UID: \"e426e3f3-3d26-4b49-873b-3a442b7de183\") " Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.237116 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78be2f68-3d21-4345-8544-3809d5dab436-operator-scripts\") pod \"78be2f68-3d21-4345-8544-3809d5dab436\" (UID: \"78be2f68-3d21-4345-8544-3809d5dab436\") " Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.237284 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckhmm\" (UniqueName: \"kubernetes.io/projected/e426e3f3-3d26-4b49-873b-3a442b7de183-kube-api-access-ckhmm\") pod \"e426e3f3-3d26-4b49-873b-3a442b7de183\" (UID: \"e426e3f3-3d26-4b49-873b-3a442b7de183\") " Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.237450 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2jdx\" (UniqueName: \"kubernetes.io/projected/78be2f68-3d21-4345-8544-3809d5dab436-kube-api-access-p2jdx\") pod \"78be2f68-3d21-4345-8544-3809d5dab436\" (UID: \"78be2f68-3d21-4345-8544-3809d5dab436\") " Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.237515 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26m8l\" (UniqueName: \"kubernetes.io/projected/ba562765-65c2-4259-9373-38288bb120e3-kube-api-access-26m8l\") pod \"ba562765-65c2-4259-9373-38288bb120e3\" (UID: \"ba562765-65c2-4259-9373-38288bb120e3\") " Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.237538 4878 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba562765-65c2-4259-9373-38288bb120e3-operator-scripts\") pod \"ba562765-65c2-4259-9373-38288bb120e3\" (UID: \"ba562765-65c2-4259-9373-38288bb120e3\") " Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.237832 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e426e3f3-3d26-4b49-873b-3a442b7de183-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e426e3f3-3d26-4b49-873b-3a442b7de183" (UID: "e426e3f3-3d26-4b49-873b-3a442b7de183"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.238478 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e426e3f3-3d26-4b49-873b-3a442b7de183-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.241089 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba562765-65c2-4259-9373-38288bb120e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba562765-65c2-4259-9373-38288bb120e3" (UID: "ba562765-65c2-4259-9373-38288bb120e3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.261247 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78be2f68-3d21-4345-8544-3809d5dab436-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78be2f68-3d21-4345-8544-3809d5dab436" (UID: "78be2f68-3d21-4345-8544-3809d5dab436"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.279208 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78be2f68-3d21-4345-8544-3809d5dab436-kube-api-access-p2jdx" (OuterVolumeSpecName: "kube-api-access-p2jdx") pod "78be2f68-3d21-4345-8544-3809d5dab436" (UID: "78be2f68-3d21-4345-8544-3809d5dab436"). InnerVolumeSpecName "kube-api-access-p2jdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.280101 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba562765-65c2-4259-9373-38288bb120e3-kube-api-access-26m8l" (OuterVolumeSpecName: "kube-api-access-26m8l") pod "ba562765-65c2-4259-9373-38288bb120e3" (UID: "ba562765-65c2-4259-9373-38288bb120e3"). InnerVolumeSpecName "kube-api-access-26m8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.290255 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e426e3f3-3d26-4b49-873b-3a442b7de183-kube-api-access-ckhmm" (OuterVolumeSpecName: "kube-api-access-ckhmm") pod "e426e3f3-3d26-4b49-873b-3a442b7de183" (UID: "e426e3f3-3d26-4b49-873b-3a442b7de183"). InnerVolumeSpecName "kube-api-access-ckhmm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.340611 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckhmm\" (UniqueName: \"kubernetes.io/projected/e426e3f3-3d26-4b49-873b-3a442b7de183-kube-api-access-ckhmm\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.341139 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2jdx\" (UniqueName: \"kubernetes.io/projected/78be2f68-3d21-4345-8544-3809d5dab436-kube-api-access-p2jdx\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.341154 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26m8l\" (UniqueName: \"kubernetes.io/projected/ba562765-65c2-4259-9373-38288bb120e3-kube-api-access-26m8l\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.341166 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba562765-65c2-4259-9373-38288bb120e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.341176 4878 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78be2f68-3d21-4345-8544-3809d5dab436-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.552155 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-hxdhg" event={"ID":"7c679082-d66c-4280-bfab-15d1b6634db9","Type":"ContainerDied","Data":"5ba5e90824f4ba1df0e0c0d1991982dbb7c02a01ae52d03ec7f1d5e51812c6de"} Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.552224 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ba5e90824f4ba1df0e0c0d1991982dbb7c02a01ae52d03ec7f1d5e51812c6de" Dec 04 15:57:43 crc 
kubenswrapper[4878]: I1204 15:57:43.552302 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-hxdhg" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.555589 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ac85-account-create-update-8pm6w" event={"ID":"e426e3f3-3d26-4b49-873b-3a442b7de183","Type":"ContainerDied","Data":"c7eb2282557a5f1c8b3153f61bdd532d29f5bb5a4d09638aecdd163d344276a8"} Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.555615 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7eb2282557a5f1c8b3153f61bdd532d29f5bb5a4d09638aecdd163d344276a8" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.555658 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ac85-account-create-update-8pm6w" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.559174 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" event={"ID":"78be2f68-3d21-4345-8544-3809d5dab436","Type":"ContainerDied","Data":"474f622a189f7b071d4579bdc42c0a797511a396a27377c98efd505c612ad44d"} Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.559229 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="474f622a189f7b071d4579bdc42c0a797511a396a27377c98efd505c612ad44d" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.559309 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-bea9-account-create-update-8rhwx" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.585483 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-47pmx" event={"ID":"ba562765-65c2-4259-9373-38288bb120e3","Type":"ContainerDied","Data":"2226b616ede75cd0b28b6161b1b732c0cc9e22783d334843577d7dd48679e375"} Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.585559 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2226b616ede75cd0b28b6161b1b732c0cc9e22783d334843577d7dd48679e375" Dec 04 15:57:43 crc kubenswrapper[4878]: I1204 15:57:43.585592 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-47pmx" Dec 04 15:57:44 crc kubenswrapper[4878]: I1204 15:57:44.598564 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerStarted","Data":"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f"} Dec 04 15:57:44 crc kubenswrapper[4878]: I1204 15:57:44.598776 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="ceilometer-central-agent" containerID="cri-o://ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240" gracePeriod=30 Dec 04 15:57:44 crc kubenswrapper[4878]: I1204 15:57:44.598837 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="sg-core" containerID="cri-o://59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d" gracePeriod=30 Dec 04 15:57:44 crc kubenswrapper[4878]: I1204 15:57:44.598895 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="ceilometer-notification-agent" containerID="cri-o://f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596" gracePeriod=30 Dec 04 15:57:44 crc kubenswrapper[4878]: I1204 15:57:44.599167 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 15:57:44 crc kubenswrapper[4878]: I1204 15:57:44.599216 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="proxy-httpd" containerID="cri-o://3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f" gracePeriod=30 Dec 04 15:57:44 crc kubenswrapper[4878]: I1204 15:57:44.627544 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=8.799130632 podStartE2EDuration="13.627520189s" podCreationTimestamp="2025-12-04 15:57:31 +0000 UTC" firstStartedPulling="2025-12-04 15:57:38.915670149 +0000 UTC m=+1302.878207105" lastFinishedPulling="2025-12-04 15:57:43.744059706 +0000 UTC m=+1307.706596662" observedRunningTime="2025-12-04 15:57:44.625460877 +0000 UTC m=+1308.587997843" watchObservedRunningTime="2025-12-04 15:57:44.627520189 +0000 UTC m=+1308.590057145" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.554217 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.597374 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-run-httpd\") pod \"baf9cebe-ff47-435b-b578-59d30678089a\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.597845 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-log-httpd\") pod \"baf9cebe-ff47-435b-b578-59d30678089a\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.597966 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "baf9cebe-ff47-435b-b578-59d30678089a" (UID: "baf9cebe-ff47-435b-b578-59d30678089a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.597991 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-sg-core-conf-yaml\") pod \"baf9cebe-ff47-435b-b578-59d30678089a\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.598200 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl5vj\" (UniqueName: \"kubernetes.io/projected/baf9cebe-ff47-435b-b578-59d30678089a-kube-api-access-wl5vj\") pod \"baf9cebe-ff47-435b-b578-59d30678089a\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.598291 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-scripts\") pod \"baf9cebe-ff47-435b-b578-59d30678089a\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.598329 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-combined-ca-bundle\") pod \"baf9cebe-ff47-435b-b578-59d30678089a\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.598393 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-config-data\") pod \"baf9cebe-ff47-435b-b578-59d30678089a\" (UID: \"baf9cebe-ff47-435b-b578-59d30678089a\") " Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.599622 4878 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.604643 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "baf9cebe-ff47-435b-b578-59d30678089a" (UID: "baf9cebe-ff47-435b-b578-59d30678089a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.608851 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-scripts" (OuterVolumeSpecName: "scripts") pod "baf9cebe-ff47-435b-b578-59d30678089a" (UID: "baf9cebe-ff47-435b-b578-59d30678089a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.617193 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baf9cebe-ff47-435b-b578-59d30678089a-kube-api-access-wl5vj" (OuterVolumeSpecName: "kube-api-access-wl5vj") pod "baf9cebe-ff47-435b-b578-59d30678089a" (UID: "baf9cebe-ff47-435b-b578-59d30678089a"). InnerVolumeSpecName "kube-api-access-wl5vj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625161 4878 generic.go:334] "Generic (PLEG): container finished" podID="baf9cebe-ff47-435b-b578-59d30678089a" containerID="3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f" exitCode=0 Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625205 4878 generic.go:334] "Generic (PLEG): container finished" podID="baf9cebe-ff47-435b-b578-59d30678089a" containerID="59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d" exitCode=2 Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625215 4878 generic.go:334] "Generic (PLEG): container finished" podID="baf9cebe-ff47-435b-b578-59d30678089a" containerID="f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596" exitCode=0 Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625227 4878 generic.go:334] "Generic (PLEG): container finished" podID="baf9cebe-ff47-435b-b578-59d30678089a" containerID="ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240" exitCode=0 Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625256 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerDied","Data":"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f"} Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625304 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerDied","Data":"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d"} Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625320 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerDied","Data":"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596"} Dec 04 15:57:45 crc 
kubenswrapper[4878]: I1204 15:57:45.625331 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerDied","Data":"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240"} Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625344 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"baf9cebe-ff47-435b-b578-59d30678089a","Type":"ContainerDied","Data":"7dd47457ee89a4a8edd577460cddd8f7fac3d1357f6eab7a850149995412cabe"} Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625367 4878 scope.go:117] "RemoveContainer" containerID="3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.625594 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.656366 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "baf9cebe-ff47-435b-b578-59d30678089a" (UID: "baf9cebe-ff47-435b-b578-59d30678089a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.666338 4878 scope.go:117] "RemoveContainer" containerID="59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.692129 4878 scope.go:117] "RemoveContainer" containerID="f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.700744 4878 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/baf9cebe-ff47-435b-b578-59d30678089a-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.701155 4878 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.701222 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl5vj\" (UniqueName: \"kubernetes.io/projected/baf9cebe-ff47-435b-b578-59d30678089a-kube-api-access-wl5vj\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.701764 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.708272 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "baf9cebe-ff47-435b-b578-59d30678089a" (UID: "baf9cebe-ff47-435b-b578-59d30678089a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.732746 4878 scope.go:117] "RemoveContainer" containerID="ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.746743 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-config-data" (OuterVolumeSpecName: "config-data") pod "baf9cebe-ff47-435b-b578-59d30678089a" (UID: "baf9cebe-ff47-435b-b578-59d30678089a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.762330 4878 scope.go:117] "RemoveContainer" containerID="3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f" Dec 04 15:57:45 crc kubenswrapper[4878]: E1204 15:57:45.763055 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f\": container with ID starting with 3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f not found: ID does not exist" containerID="3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.763097 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f"} err="failed to get container status \"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f\": rpc error: code = NotFound desc = could not find container \"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f\": container with ID starting with 3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.763124 4878 scope.go:117] "RemoveContainer" 
containerID="59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d" Dec 04 15:57:45 crc kubenswrapper[4878]: E1204 15:57:45.763535 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d\": container with ID starting with 59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d not found: ID does not exist" containerID="59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.763604 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d"} err="failed to get container status \"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d\": rpc error: code = NotFound desc = could not find container \"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d\": container with ID starting with 59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.763628 4878 scope.go:117] "RemoveContainer" containerID="f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596" Dec 04 15:57:45 crc kubenswrapper[4878]: E1204 15:57:45.764179 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596\": container with ID starting with f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596 not found: ID does not exist" containerID="f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.764234 4878 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596"} err="failed to get container status \"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596\": rpc error: code = NotFound desc = could not find container \"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596\": container with ID starting with f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596 not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.764271 4878 scope.go:117] "RemoveContainer" containerID="ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240" Dec 04 15:57:45 crc kubenswrapper[4878]: E1204 15:57:45.764787 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240\": container with ID starting with ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240 not found: ID does not exist" containerID="ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.764819 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240"} err="failed to get container status \"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240\": rpc error: code = NotFound desc = could not find container \"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240\": container with ID starting with ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240 not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.764840 4878 scope.go:117] "RemoveContainer" containerID="3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.765106 4878 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f"} err="failed to get container status \"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f\": rpc error: code = NotFound desc = could not find container \"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f\": container with ID starting with 3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.765134 4878 scope.go:117] "RemoveContainer" containerID="59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.765398 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d"} err="failed to get container status \"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d\": rpc error: code = NotFound desc = could not find container \"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d\": container with ID starting with 59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.765423 4878 scope.go:117] "RemoveContainer" containerID="f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.765825 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596"} err="failed to get container status \"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596\": rpc error: code = NotFound desc = could not find container \"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596\": container with ID starting with f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596 not 
found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.765852 4878 scope.go:117] "RemoveContainer" containerID="ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.766169 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240"} err="failed to get container status \"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240\": rpc error: code = NotFound desc = could not find container \"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240\": container with ID starting with ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240 not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.766196 4878 scope.go:117] "RemoveContainer" containerID="3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.766450 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f"} err="failed to get container status \"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f\": rpc error: code = NotFound desc = could not find container \"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f\": container with ID starting with 3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.766473 4878 scope.go:117] "RemoveContainer" containerID="59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.766706 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d"} err="failed to get 
container status \"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d\": rpc error: code = NotFound desc = could not find container \"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d\": container with ID starting with 59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.766732 4878 scope.go:117] "RemoveContainer" containerID="f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.767029 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596"} err="failed to get container status \"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596\": rpc error: code = NotFound desc = could not find container \"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596\": container with ID starting with f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596 not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.767055 4878 scope.go:117] "RemoveContainer" containerID="ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.767551 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240"} err="failed to get container status \"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240\": rpc error: code = NotFound desc = could not find container \"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240\": container with ID starting with ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240 not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.767603 4878 scope.go:117] "RemoveContainer" 
containerID="3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.767854 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f"} err="failed to get container status \"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f\": rpc error: code = NotFound desc = could not find container \"3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f\": container with ID starting with 3c6d17ca0bad69123464aff15b6c8216f8e59f307f72ed1082058c7c9b2d1d0f not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.767948 4878 scope.go:117] "RemoveContainer" containerID="59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.768333 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d"} err="failed to get container status \"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d\": rpc error: code = NotFound desc = could not find container \"59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d\": container with ID starting with 59016a1fe24d03ab5d15757b2b8b4a6fc339207d3eba7f3c6dee5d7de7f6ea5d not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.768361 4878 scope.go:117] "RemoveContainer" containerID="f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.768589 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596"} err="failed to get container status \"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596\": rpc error: code = NotFound desc = could 
not find container \"f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596\": container with ID starting with f6da9893458d5fd92e7984f7f992f508ba8aa29ee064c9786f761548aae8a596 not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.768616 4878 scope.go:117] "RemoveContainer" containerID="ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.768919 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240"} err="failed to get container status \"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240\": rpc error: code = NotFound desc = could not find container \"ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240\": container with ID starting with ca89694b265277a6393e2bc853d000c1ae77e8e785cf0a5067260f65ec09f240 not found: ID does not exist" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.804148 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.804194 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/baf9cebe-ff47-435b-b578-59d30678089a-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.966708 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:45 crc kubenswrapper[4878]: I1204 15:57:45.989550 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.003838 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:46 crc 
kubenswrapper[4878]: E1204 15:57:46.004383 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="sg-core" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004408 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="sg-core" Dec 04 15:57:46 crc kubenswrapper[4878]: E1204 15:57:46.004428 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c679082-d66c-4280-bfab-15d1b6634db9" containerName="mariadb-database-create" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004435 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c679082-d66c-4280-bfab-15d1b6634db9" containerName="mariadb-database-create" Dec 04 15:57:46 crc kubenswrapper[4878]: E1204 15:57:46.004487 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba562765-65c2-4259-9373-38288bb120e3" containerName="mariadb-database-create" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004496 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba562765-65c2-4259-9373-38288bb120e3" containerName="mariadb-database-create" Dec 04 15:57:46 crc kubenswrapper[4878]: E1204 15:57:46.004512 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="proxy-httpd" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004519 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="proxy-httpd" Dec 04 15:57:46 crc kubenswrapper[4878]: E1204 15:57:46.004539 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="ceilometer-central-agent" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004546 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="ceilometer-central-agent" Dec 04 
15:57:46 crc kubenswrapper[4878]: E1204 15:57:46.004562 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="ceilometer-notification-agent" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004569 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="ceilometer-notification-agent" Dec 04 15:57:46 crc kubenswrapper[4878]: E1204 15:57:46.004581 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0ab7e57-08d0-4697-bbf7-3abe045473b0" containerName="mariadb-account-create-update" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004588 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0ab7e57-08d0-4697-bbf7-3abe045473b0" containerName="mariadb-account-create-update" Dec 04 15:57:46 crc kubenswrapper[4878]: E1204 15:57:46.004595 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78be2f68-3d21-4345-8544-3809d5dab436" containerName="mariadb-account-create-update" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004602 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="78be2f68-3d21-4345-8544-3809d5dab436" containerName="mariadb-account-create-update" Dec 04 15:57:46 crc kubenswrapper[4878]: E1204 15:57:46.004625 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e426e3f3-3d26-4b49-873b-3a442b7de183" containerName="mariadb-account-create-update" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004633 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e426e3f3-3d26-4b49-873b-3a442b7de183" containerName="mariadb-account-create-update" Dec 04 15:57:46 crc kubenswrapper[4878]: E1204 15:57:46.004647 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba4913b-3e30-4b9c-a404-50217b5f1657" containerName="mariadb-database-create" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004655 4878 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="bba4913b-3e30-4b9c-a404-50217b5f1657" containerName="mariadb-database-create" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004837 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="ceilometer-central-agent" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004853 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba4913b-3e30-4b9c-a404-50217b5f1657" containerName="mariadb-database-create" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004885 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c679082-d66c-4280-bfab-15d1b6634db9" containerName="mariadb-database-create" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004894 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="ceilometer-notification-agent" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004908 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="78be2f68-3d21-4345-8544-3809d5dab436" containerName="mariadb-account-create-update" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004919 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf9cebe-ff47-435b-b578-59d30678089a" containerName="proxy-httpd" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004932 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0ab7e57-08d0-4697-bbf7-3abe045473b0" containerName="mariadb-account-create-update" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004941 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba562765-65c2-4259-9373-38288bb120e3" containerName="mariadb-database-create" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004950 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="baf9cebe-ff47-435b-b578-59d30678089a" 
containerName="sg-core" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.004959 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e426e3f3-3d26-4b49-873b-3a442b7de183" containerName="mariadb-account-create-update" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.007457 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.010269 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.010283 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.018235 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.111177 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.111241 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-log-httpd\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.111383 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " 
pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.111461 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-run-httpd\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.111495 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-scripts\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.111539 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-config-data\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.111581 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfxwz\" (UniqueName: \"kubernetes.io/projected/456fd2c2-21fb-43f4-8495-121b28b96f91-kube-api-access-wfxwz\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.213365 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.213781 4878 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-run-httpd\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.213935 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-scripts\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.214066 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-config-data\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.214138 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfxwz\" (UniqueName: \"kubernetes.io/projected/456fd2c2-21fb-43f4-8495-121b28b96f91-kube-api-access-wfxwz\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.214236 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.214319 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-log-httpd\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 
15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.214692 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-run-httpd\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.215012 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-log-httpd\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.222437 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.222570 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.225652 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-scripts\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.238136 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-config-data\") pod \"ceilometer-0\" (UID: 
\"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.245266 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfxwz\" (UniqueName: \"kubernetes.io/projected/456fd2c2-21fb-43f4-8495-121b28b96f91-kube-api-access-wfxwz\") pod \"ceilometer-0\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.352845 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:57:46 crc kubenswrapper[4878]: I1204 15:57:46.699069 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:46 crc kubenswrapper[4878]: W1204 15:57:46.703088 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod456fd2c2_21fb_43f4_8495_121b28b96f91.slice/crio-4dc222268a2fd88abd0abe2cabba2f8a002bdc75d406aba9d19866cc15188dc7 WatchSource:0}: Error finding container 4dc222268a2fd88abd0abe2cabba2f8a002bdc75d406aba9d19866cc15188dc7: Status 404 returned error can't find the container with id 4dc222268a2fd88abd0abe2cabba2f8a002bdc75d406aba9d19866cc15188dc7 Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.194976 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baf9cebe-ff47-435b-b578-59d30678089a" path="/var/lib/kubelet/pods/baf9cebe-ff47-435b-b578-59d30678089a/volumes" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.358687 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.543729 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86826\" (UniqueName: \"kubernetes.io/projected/50fc708e-8903-4765-aa76-c2125c0b8d22-kube-api-access-86826\") pod \"50fc708e-8903-4765-aa76-c2125c0b8d22\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.544025 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-secret-key\") pod \"50fc708e-8903-4765-aa76-c2125c0b8d22\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.544073 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-config-data\") pod \"50fc708e-8903-4765-aa76-c2125c0b8d22\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.545739 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-scripts\") pod \"50fc708e-8903-4765-aa76-c2125c0b8d22\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.546101 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50fc708e-8903-4765-aa76-c2125c0b8d22-logs\") pod \"50fc708e-8903-4765-aa76-c2125c0b8d22\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.546160 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-tls-certs\") pod \"50fc708e-8903-4765-aa76-c2125c0b8d22\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.546227 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-combined-ca-bundle\") pod \"50fc708e-8903-4765-aa76-c2125c0b8d22\" (UID: \"50fc708e-8903-4765-aa76-c2125c0b8d22\") " Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.546955 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50fc708e-8903-4765-aa76-c2125c0b8d22-logs" (OuterVolumeSpecName: "logs") pod "50fc708e-8903-4765-aa76-c2125c0b8d22" (UID: "50fc708e-8903-4765-aa76-c2125c0b8d22"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.547906 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/50fc708e-8903-4765-aa76-c2125c0b8d22-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.551134 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50fc708e-8903-4765-aa76-c2125c0b8d22-kube-api-access-86826" (OuterVolumeSpecName: "kube-api-access-86826") pod "50fc708e-8903-4765-aa76-c2125c0b8d22" (UID: "50fc708e-8903-4765-aa76-c2125c0b8d22"). InnerVolumeSpecName "kube-api-access-86826". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.558015 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "50fc708e-8903-4765-aa76-c2125c0b8d22" (UID: "50fc708e-8903-4765-aa76-c2125c0b8d22"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.581064 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-config-data" (OuterVolumeSpecName: "config-data") pod "50fc708e-8903-4765-aa76-c2125c0b8d22" (UID: "50fc708e-8903-4765-aa76-c2125c0b8d22"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.589238 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-scripts" (OuterVolumeSpecName: "scripts") pod "50fc708e-8903-4765-aa76-c2125c0b8d22" (UID: "50fc708e-8903-4765-aa76-c2125c0b8d22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.590911 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50fc708e-8903-4765-aa76-c2125c0b8d22" (UID: "50fc708e-8903-4765-aa76-c2125c0b8d22"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.613181 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "50fc708e-8903-4765-aa76-c2125c0b8d22" (UID: "50fc708e-8903-4765-aa76-c2125c0b8d22"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.651943 4878 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.651996 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.652013 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86826\" (UniqueName: \"kubernetes.io/projected/50fc708e-8903-4765-aa76-c2125c0b8d22-kube-api-access-86826\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.652033 4878 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/50fc708e-8903-4765-aa76-c2125c0b8d22-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.652056 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.652073 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/50fc708e-8903-4765-aa76-c2125c0b8d22-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.655599 4878 generic.go:334] "Generic (PLEG): container finished" podID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerID="6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0" exitCode=137 Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.655665 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db576cdd4-fp9zg" event={"ID":"50fc708e-8903-4765-aa76-c2125c0b8d22","Type":"ContainerDied","Data":"6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0"} Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.655697 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db576cdd4-fp9zg" event={"ID":"50fc708e-8903-4765-aa76-c2125c0b8d22","Type":"ContainerDied","Data":"6bbd84b8748b8f9ab0e49273c9c6fb9f678344bf78162f3017aa76705be5260a"} Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.655717 4878 scope.go:117] "RemoveContainer" containerID="943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.655866 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-db576cdd4-fp9zg" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.666549 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerStarted","Data":"94ff57b4c990c01cedad54a731cab0db62402e522a2cece200008e53380b1f60"} Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.666615 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerStarted","Data":"4dc222268a2fd88abd0abe2cabba2f8a002bdc75d406aba9d19866cc15188dc7"} Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.689530 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.703288 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-db576cdd4-fp9zg"] Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.717642 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-db576cdd4-fp9zg"] Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.850791 4878 scope.go:117] "RemoveContainer" containerID="6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.887528 4878 scope.go:117] "RemoveContainer" containerID="943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2" Dec 04 15:57:47 crc kubenswrapper[4878]: E1204 15:57:47.915865 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2\": container with ID starting with 943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2 not found: ID does not exist" containerID="943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 
15:57:47.915948 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2"} err="failed to get container status \"943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2\": rpc error: code = NotFound desc = could not find container \"943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2\": container with ID starting with 943dd2e13a42176ad2dec5aeea53a4b59bb6c17c6682885c6fa5e63fec70a7f2 not found: ID does not exist" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.915986 4878 scope.go:117] "RemoveContainer" containerID="6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0" Dec 04 15:57:47 crc kubenswrapper[4878]: E1204 15:57:47.916800 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0\": container with ID starting with 6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0 not found: ID does not exist" containerID="6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.916828 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0"} err="failed to get container status \"6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0\": rpc error: code = NotFound desc = could not find container \"6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0\": container with ID starting with 6d57a47265e5b6ad21373790aad400dd6d71c6d631068f601f4318b4fa51f3d0 not found: ID does not exist" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.985595 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lsw79"] Dec 04 15:57:47 crc kubenswrapper[4878]: E1204 
15:57:47.986071 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon-log" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.986087 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon-log" Dec 04 15:57:47 crc kubenswrapper[4878]: E1204 15:57:47.986114 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.986120 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.986324 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.986340 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" containerName="horizon-log" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.987515 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.995568 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.995842 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p5lt8" Dec 04 15:57:47 crc kubenswrapper[4878]: I1204 15:57:47.996021 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.008596 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lsw79"] Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.057881 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-config-data\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.058006 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.058034 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-scripts\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " 
pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.058070 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z687\" (UniqueName: \"kubernetes.io/projected/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-kube-api-access-6z687\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.160037 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-config-data\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.160196 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.160298 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-scripts\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.160337 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z687\" (UniqueName: \"kubernetes.io/projected/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-kube-api-access-6z687\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: 
\"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.166594 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-scripts\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.166897 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.177379 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-config-data\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.188323 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z687\" (UniqueName: \"kubernetes.io/projected/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-kube-api-access-6z687\") pod \"nova-cell0-conductor-db-sync-lsw79\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.314792 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:57:48 crc kubenswrapper[4878]: I1204 15:57:48.937587 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lsw79"] Dec 04 15:57:49 crc kubenswrapper[4878]: I1204 15:57:49.196387 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50fc708e-8903-4765-aa76-c2125c0b8d22" path="/var/lib/kubelet/pods/50fc708e-8903-4765-aa76-c2125c0b8d22/volumes" Dec 04 15:57:49 crc kubenswrapper[4878]: I1204 15:57:49.715253 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lsw79" event={"ID":"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5","Type":"ContainerStarted","Data":"c5e49a50958ab3f91830eccfc1f950630212b6db322214da4c185c8c6d986840"} Dec 04 15:57:50 crc kubenswrapper[4878]: I1204 15:57:50.740363 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerStarted","Data":"6b3d8afb5ea0a3adef3713cfc6472303cb9292ec194f2bf3bf4d1f040a759016"} Dec 04 15:57:51 crc kubenswrapper[4878]: I1204 15:57:51.880421 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 15:57:52 crc kubenswrapper[4878]: I1204 15:57:52.783181 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerStarted","Data":"fdf49466c3c68c966362331c9df244341caa4aa49edbb75159a38e7de8c0e818"} Dec 04 15:57:53 crc kubenswrapper[4878]: I1204 15:57:53.807310 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerStarted","Data":"d1f44d2efdfd940c4644c09be9242ebd4e540cf5ec24c206d10085e11bb65e7a"} Dec 04 15:57:53 crc kubenswrapper[4878]: I1204 15:57:53.808228 4878 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/ceilometer-0" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="ceilometer-central-agent" containerID="cri-o://94ff57b4c990c01cedad54a731cab0db62402e522a2cece200008e53380b1f60" gracePeriod=30 Dec 04 15:57:53 crc kubenswrapper[4878]: I1204 15:57:53.808634 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="proxy-httpd" containerID="cri-o://d1f44d2efdfd940c4644c09be9242ebd4e540cf5ec24c206d10085e11bb65e7a" gracePeriod=30 Dec 04 15:57:53 crc kubenswrapper[4878]: I1204 15:57:53.808660 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="ceilometer-notification-agent" containerID="cri-o://6b3d8afb5ea0a3adef3713cfc6472303cb9292ec194f2bf3bf4d1f040a759016" gracePeriod=30 Dec 04 15:57:53 crc kubenswrapper[4878]: I1204 15:57:53.809065 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 15:57:53 crc kubenswrapper[4878]: I1204 15:57:53.809083 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="sg-core" containerID="cri-o://fdf49466c3c68c966362331c9df244341caa4aa49edbb75159a38e7de8c0e818" gracePeriod=30 Dec 04 15:57:53 crc kubenswrapper[4878]: I1204 15:57:53.849481 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.575188942 podStartE2EDuration="8.84945003s" podCreationTimestamp="2025-12-04 15:57:45 +0000 UTC" firstStartedPulling="2025-12-04 15:57:46.70567088 +0000 UTC m=+1310.668207836" lastFinishedPulling="2025-12-04 15:57:52.979931968 +0000 UTC m=+1316.942468924" observedRunningTime="2025-12-04 15:57:53.840798022 +0000 UTC m=+1317.803334988" watchObservedRunningTime="2025-12-04 
15:57:53.84945003 +0000 UTC m=+1317.811986986" Dec 04 15:57:54 crc kubenswrapper[4878]: I1204 15:57:54.830687 4878 generic.go:334] "Generic (PLEG): container finished" podID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerID="d1f44d2efdfd940c4644c09be9242ebd4e540cf5ec24c206d10085e11bb65e7a" exitCode=0 Dec 04 15:57:54 crc kubenswrapper[4878]: I1204 15:57:54.831088 4878 generic.go:334] "Generic (PLEG): container finished" podID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerID="fdf49466c3c68c966362331c9df244341caa4aa49edbb75159a38e7de8c0e818" exitCode=2 Dec 04 15:57:54 crc kubenswrapper[4878]: I1204 15:57:54.831099 4878 generic.go:334] "Generic (PLEG): container finished" podID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerID="6b3d8afb5ea0a3adef3713cfc6472303cb9292ec194f2bf3bf4d1f040a759016" exitCode=0 Dec 04 15:57:54 crc kubenswrapper[4878]: I1204 15:57:54.830799 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerDied","Data":"d1f44d2efdfd940c4644c09be9242ebd4e540cf5ec24c206d10085e11bb65e7a"} Dec 04 15:57:54 crc kubenswrapper[4878]: I1204 15:57:54.831151 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerDied","Data":"fdf49466c3c68c966362331c9df244341caa4aa49edbb75159a38e7de8c0e818"} Dec 04 15:57:54 crc kubenswrapper[4878]: I1204 15:57:54.831174 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerDied","Data":"6b3d8afb5ea0a3adef3713cfc6472303cb9292ec194f2bf3bf4d1f040a759016"} Dec 04 15:58:01 crc kubenswrapper[4878]: I1204 15:58:01.149498 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lsw79" 
event={"ID":"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5","Type":"ContainerStarted","Data":"f62546df1a806b4affe0ec75a0cd217b50415d73d315dcdc9c72d0d10d3a53ab"} Dec 04 15:58:01 crc kubenswrapper[4878]: I1204 15:58:01.170284 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-lsw79" podStartSLOduration=2.5702349460000002 podStartE2EDuration="14.170257086s" podCreationTimestamp="2025-12-04 15:57:47 +0000 UTC" firstStartedPulling="2025-12-04 15:57:48.935655767 +0000 UTC m=+1312.898192713" lastFinishedPulling="2025-12-04 15:58:00.535677897 +0000 UTC m=+1324.498214853" observedRunningTime="2025-12-04 15:58:01.165405464 +0000 UTC m=+1325.127942420" watchObservedRunningTime="2025-12-04 15:58:01.170257086 +0000 UTC m=+1325.132794042" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.182573 4878 generic.go:334] "Generic (PLEG): container finished" podID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerID="94ff57b4c990c01cedad54a731cab0db62402e522a2cece200008e53380b1f60" exitCode=0 Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.183150 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerDied","Data":"94ff57b4c990c01cedad54a731cab0db62402e522a2cece200008e53380b1f60"} Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.276760 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.357680 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-scripts\") pod \"456fd2c2-21fb-43f4-8495-121b28b96f91\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.357759 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-sg-core-conf-yaml\") pod \"456fd2c2-21fb-43f4-8495-121b28b96f91\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.357841 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-log-httpd\") pod \"456fd2c2-21fb-43f4-8495-121b28b96f91\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.357898 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-config-data\") pod \"456fd2c2-21fb-43f4-8495-121b28b96f91\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.357918 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfxwz\" (UniqueName: \"kubernetes.io/projected/456fd2c2-21fb-43f4-8495-121b28b96f91-kube-api-access-wfxwz\") pod \"456fd2c2-21fb-43f4-8495-121b28b96f91\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.357945 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-combined-ca-bundle\") pod \"456fd2c2-21fb-43f4-8495-121b28b96f91\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.358024 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-run-httpd\") pod \"456fd2c2-21fb-43f4-8495-121b28b96f91\" (UID: \"456fd2c2-21fb-43f4-8495-121b28b96f91\") " Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.358847 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "456fd2c2-21fb-43f4-8495-121b28b96f91" (UID: "456fd2c2-21fb-43f4-8495-121b28b96f91"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.359364 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "456fd2c2-21fb-43f4-8495-121b28b96f91" (UID: "456fd2c2-21fb-43f4-8495-121b28b96f91"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.376166 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-scripts" (OuterVolumeSpecName: "scripts") pod "456fd2c2-21fb-43f4-8495-121b28b96f91" (UID: "456fd2c2-21fb-43f4-8495-121b28b96f91"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.376166 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456fd2c2-21fb-43f4-8495-121b28b96f91-kube-api-access-wfxwz" (OuterVolumeSpecName: "kube-api-access-wfxwz") pod "456fd2c2-21fb-43f4-8495-121b28b96f91" (UID: "456fd2c2-21fb-43f4-8495-121b28b96f91"). InnerVolumeSpecName "kube-api-access-wfxwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.409273 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "456fd2c2-21fb-43f4-8495-121b28b96f91" (UID: "456fd2c2-21fb-43f4-8495-121b28b96f91"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.445139 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "456fd2c2-21fb-43f4-8495-121b28b96f91" (UID: "456fd2c2-21fb-43f4-8495-121b28b96f91"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.461289 4878 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.461360 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfxwz\" (UniqueName: \"kubernetes.io/projected/456fd2c2-21fb-43f4-8495-121b28b96f91-kube-api-access-wfxwz\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.461374 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.461385 4878 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/456fd2c2-21fb-43f4-8495-121b28b96f91-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.461394 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.461405 4878 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.469459 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-config-data" (OuterVolumeSpecName: "config-data") pod "456fd2c2-21fb-43f4-8495-121b28b96f91" (UID: "456fd2c2-21fb-43f4-8495-121b28b96f91"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:02 crc kubenswrapper[4878]: I1204 15:58:02.563245 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/456fd2c2-21fb-43f4-8495-121b28b96f91-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.198130 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.200753 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"456fd2c2-21fb-43f4-8495-121b28b96f91","Type":"ContainerDied","Data":"4dc222268a2fd88abd0abe2cabba2f8a002bdc75d406aba9d19866cc15188dc7"} Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.201401 4878 scope.go:117] "RemoveContainer" containerID="d1f44d2efdfd940c4644c09be9242ebd4e540cf5ec24c206d10085e11bb65e7a" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.231047 4878 scope.go:117] "RemoveContainer" containerID="fdf49466c3c68c966362331c9df244341caa4aa49edbb75159a38e7de8c0e818" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.247618 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.262375 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.272795 4878 scope.go:117] "RemoveContainer" containerID="6b3d8afb5ea0a3adef3713cfc6472303cb9292ec194f2bf3bf4d1f040a759016" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.292380 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:03 crc kubenswrapper[4878]: E1204 15:58:03.293027 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" 
containerName="ceilometer-central-agent" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.293047 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="ceilometer-central-agent" Dec 04 15:58:03 crc kubenswrapper[4878]: E1204 15:58:03.293058 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="proxy-httpd" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.293068 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="proxy-httpd" Dec 04 15:58:03 crc kubenswrapper[4878]: E1204 15:58:03.293077 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="sg-core" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.293083 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="sg-core" Dec 04 15:58:03 crc kubenswrapper[4878]: E1204 15:58:03.293114 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="ceilometer-notification-agent" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.293127 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="ceilometer-notification-agent" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.293344 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="proxy-httpd" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.293362 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="ceilometer-notification-agent" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.293372 4878 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="sg-core" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.293383 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" containerName="ceilometer-central-agent" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.295280 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.303275 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.309986 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.321396 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.322104 4878 scope.go:117] "RemoveContainer" containerID="94ff57b4c990c01cedad54a731cab0db62402e522a2cece200008e53380b1f60" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.381738 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.381827 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-config-data\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.382028 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-hxpvt\" (UniqueName: \"kubernetes.io/projected/365e84d0-4c76-44c5-8b82-26c2eef5547d-kube-api-access-hxpvt\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.382220 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.382376 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-log-httpd\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.382423 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-scripts\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.382612 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-run-httpd\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.484951 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxpvt\" (UniqueName: \"kubernetes.io/projected/365e84d0-4c76-44c5-8b82-26c2eef5547d-kube-api-access-hxpvt\") pod 
\"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.485092 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.485136 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-log-httpd\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.485170 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-scripts\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.485272 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-run-httpd\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.485321 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.485360 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-config-data\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.487079 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-log-httpd\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.487608 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-run-httpd\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.506541 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.508189 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.510746 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxpvt\" (UniqueName: \"kubernetes.io/projected/365e84d0-4c76-44c5-8b82-26c2eef5547d-kube-api-access-hxpvt\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 
15:58:03.511534 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-config-data\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.526262 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-scripts\") pod \"ceilometer-0\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " pod="openstack/ceilometer-0" Dec 04 15:58:03 crc kubenswrapper[4878]: I1204 15:58:03.636925 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:04 crc kubenswrapper[4878]: I1204 15:58:04.170836 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:04 crc kubenswrapper[4878]: I1204 15:58:04.214520 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerStarted","Data":"6f890e43b270a3740cf8dad70c97e6e75314b5b6482f8187ceae1b25b5139218"} Dec 04 15:58:05 crc kubenswrapper[4878]: I1204 15:58:05.193966 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456fd2c2-21fb-43f4-8495-121b28b96f91" path="/var/lib/kubelet/pods/456fd2c2-21fb-43f4-8495-121b28b96f91/volumes" Dec 04 15:58:05 crc kubenswrapper[4878]: I1204 15:58:05.228563 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerStarted","Data":"ae9ce1e462c23c4a6153d1a173184c0d2cf2f116b04be1a48335b239c547a6f7"} Dec 04 15:58:06 crc kubenswrapper[4878]: I1204 15:58:06.245501 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerStarted","Data":"34bca8fc5e24b34cf73b9fb24af7ed94f80abbcccaf50676a82f2b4f1e3f28a3"} Dec 04 15:58:07 crc kubenswrapper[4878]: I1204 15:58:07.266499 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerStarted","Data":"e05bace98c585392596ba073e8e26abfc851c87c344d44830f98877a53a56a2e"} Dec 04 15:58:08 crc kubenswrapper[4878]: I1204 15:58:08.070738 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:58:08 crc kubenswrapper[4878]: I1204 15:58:08.071459 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerName="glance-log" containerID="cri-o://e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04" gracePeriod=30 Dec 04 15:58:08 crc kubenswrapper[4878]: I1204 15:58:08.071570 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerName="glance-httpd" containerID="cri-o://841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d" gracePeriod=30 Dec 04 15:58:08 crc kubenswrapper[4878]: I1204 15:58:08.285580 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerStarted","Data":"c7f937f9e09854239b8a3f7f6cecb68d49493ac5d982c0886940a2f522415ec7"} Dec 04 15:58:08 crc kubenswrapper[4878]: I1204 15:58:08.285798 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 15:58:08 crc kubenswrapper[4878]: I1204 15:58:08.374386 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.917451794 
podStartE2EDuration="5.374364371s" podCreationTimestamp="2025-12-04 15:58:03 +0000 UTC" firstStartedPulling="2025-12-04 15:58:04.164248005 +0000 UTC m=+1328.126784961" lastFinishedPulling="2025-12-04 15:58:07.621160582 +0000 UTC m=+1331.583697538" observedRunningTime="2025-12-04 15:58:08.363542479 +0000 UTC m=+1332.326079435" watchObservedRunningTime="2025-12-04 15:58:08.374364371 +0000 UTC m=+1332.336901327" Dec 04 15:58:09 crc kubenswrapper[4878]: I1204 15:58:09.326726 4878 generic.go:334] "Generic (PLEG): container finished" podID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerID="e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04" exitCode=143 Dec 04 15:58:09 crc kubenswrapper[4878]: I1204 15:58:09.327108 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efccf44a-5dad-4080-8b51-208c7dc43e35","Type":"ContainerDied","Data":"e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04"} Dec 04 15:58:10 crc kubenswrapper[4878]: I1204 15:58:10.870609 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:58:10 crc kubenswrapper[4878]: I1204 15:58:10.871139 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerName="glance-log" containerID="cri-o://3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a" gracePeriod=30 Dec 04 15:58:10 crc kubenswrapper[4878]: I1204 15:58:10.871351 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerName="glance-httpd" containerID="cri-o://bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767" gracePeriod=30 Dec 04 15:58:11 crc kubenswrapper[4878]: I1204 15:58:11.346889 4878 generic.go:334] "Generic (PLEG): container finished" 
podID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerID="3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a" exitCode=143 Dec 04 15:58:11 crc kubenswrapper[4878]: I1204 15:58:11.346939 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4","Type":"ContainerDied","Data":"3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a"} Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.132674 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.225252 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-httpd-run\") pod \"efccf44a-5dad-4080-8b51-208c7dc43e35\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.225582 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-public-tls-certs\") pod \"efccf44a-5dad-4080-8b51-208c7dc43e35\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.225617 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-scripts\") pod \"efccf44a-5dad-4080-8b51-208c7dc43e35\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.225700 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-logs\") pod \"efccf44a-5dad-4080-8b51-208c7dc43e35\" (UID: 
\"efccf44a-5dad-4080-8b51-208c7dc43e35\") " Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.225760 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "efccf44a-5dad-4080-8b51-208c7dc43e35" (UID: "efccf44a-5dad-4080-8b51-208c7dc43e35"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.225853 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"efccf44a-5dad-4080-8b51-208c7dc43e35\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.225892 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvqkf\" (UniqueName: \"kubernetes.io/projected/efccf44a-5dad-4080-8b51-208c7dc43e35-kube-api-access-bvqkf\") pod \"efccf44a-5dad-4080-8b51-208c7dc43e35\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.225919 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-config-data\") pod \"efccf44a-5dad-4080-8b51-208c7dc43e35\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.225982 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-combined-ca-bundle\") pod \"efccf44a-5dad-4080-8b51-208c7dc43e35\" (UID: \"efccf44a-5dad-4080-8b51-208c7dc43e35\") " Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.226305 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-logs" (OuterVolumeSpecName: "logs") pod "efccf44a-5dad-4080-8b51-208c7dc43e35" (UID: "efccf44a-5dad-4080-8b51-208c7dc43e35"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.227178 4878 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.227207 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efccf44a-5dad-4080-8b51-208c7dc43e35-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.236155 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "efccf44a-5dad-4080-8b51-208c7dc43e35" (UID: "efccf44a-5dad-4080-8b51-208c7dc43e35"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.237441 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-scripts" (OuterVolumeSpecName: "scripts") pod "efccf44a-5dad-4080-8b51-208c7dc43e35" (UID: "efccf44a-5dad-4080-8b51-208c7dc43e35"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.271623 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efccf44a-5dad-4080-8b51-208c7dc43e35-kube-api-access-bvqkf" (OuterVolumeSpecName: "kube-api-access-bvqkf") pod "efccf44a-5dad-4080-8b51-208c7dc43e35" (UID: "efccf44a-5dad-4080-8b51-208c7dc43e35"). 
InnerVolumeSpecName "kube-api-access-bvqkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.275985 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efccf44a-5dad-4080-8b51-208c7dc43e35" (UID: "efccf44a-5dad-4080-8b51-208c7dc43e35"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.328108 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "efccf44a-5dad-4080-8b51-208c7dc43e35" (UID: "efccf44a-5dad-4080-8b51-208c7dc43e35"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.331860 4878 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.331911 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.331945 4878 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.331955 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvqkf\" (UniqueName: 
\"kubernetes.io/projected/efccf44a-5dad-4080-8b51-208c7dc43e35-kube-api-access-bvqkf\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.331971 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.359042 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-config-data" (OuterVolumeSpecName: "config-data") pod "efccf44a-5dad-4080-8b51-208c7dc43e35" (UID: "efccf44a-5dad-4080-8b51-208c7dc43e35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.365052 4878 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.375390 4878 generic.go:334] "Generic (PLEG): container finished" podID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerID="841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d" exitCode=0 Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.375455 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efccf44a-5dad-4080-8b51-208c7dc43e35","Type":"ContainerDied","Data":"841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d"} Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.375490 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"efccf44a-5dad-4080-8b51-208c7dc43e35","Type":"ContainerDied","Data":"de8c8c38f2bb3020841838cc4903af78a77ba8ba02d13aac66bbc3bc97d22c3f"} Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.375511 
4878 scope.go:117] "RemoveContainer" containerID="841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.375660 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.446235 4878 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.446562 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efccf44a-5dad-4080-8b51-208c7dc43e35-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.454696 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.465186 4878 scope.go:117] "RemoveContainer" containerID="e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.482201 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.510190 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:58:12 crc kubenswrapper[4878]: E1204 15:58:12.510734 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerName="glance-log" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.510760 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerName="glance-log" Dec 04 15:58:12 crc kubenswrapper[4878]: E1204 15:58:12.510792 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerName="glance-httpd" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.510801 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerName="glance-httpd" Dec 04 15:58:12 crc kubenswrapper[4878]: E1204 15:58:12.512392 4878 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefccf44a_5dad_4080_8b51_208c7dc43e35.slice\": RecentStats: unable to find data in memory cache]" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.548238 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerName="glance-httpd" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.548318 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="efccf44a-5dad-4080-8b51-208c7dc43e35" containerName="glance-log" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.548467 4878 scope.go:117] "RemoveContainer" containerID="841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.550562 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: E1204 15:58:12.551117 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d\": container with ID starting with 841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d not found: ID does not exist" containerID="841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.551185 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d"} err="failed to get container status \"841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d\": rpc error: code = NotFound desc = could not find container \"841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d\": container with ID starting with 841e754a3ff8b12a539780a48d58e4884194afb42b6c1d774d832278ddda153d not found: ID does not exist" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.551228 4878 scope.go:117] "RemoveContainer" containerID="e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04" Dec 04 15:58:12 crc kubenswrapper[4878]: E1204 15:58:12.551887 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04\": container with ID starting with e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04 not found: ID does not exist" containerID="e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.551928 4878 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04"} err="failed to get container status \"e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04\": rpc error: code = NotFound desc = could not find container \"e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04\": container with ID starting with e244f5821d95a13add90e10bd094b2aafc680e469f3352a3e28d12b8cb001b04 not found: ID does not exist" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.562502 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.562733 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.574976 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.746773 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.747567 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="proxy-httpd" containerID="cri-o://c7f937f9e09854239b8a3f7f6cecb68d49493ac5d982c0886940a2f522415ec7" gracePeriod=30 Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.747649 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="sg-core" containerID="cri-o://e05bace98c585392596ba073e8e26abfc851c87c344d44830f98877a53a56a2e" gracePeriod=30 Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.747701 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="ceilometer-notification-agent" containerID="cri-o://34bca8fc5e24b34cf73b9fb24af7ed94f80abbcccaf50676a82f2b4f1e3f28a3" gracePeriod=30 Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.748266 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="ceilometer-central-agent" containerID="cri-o://ae9ce1e462c23c4a6153d1a173184c0d2cf2f116b04be1a48335b239c547a6f7" gracePeriod=30 Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.756579 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.758131 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jntlf\" (UniqueName: \"kubernetes.io/projected/d695709a-c328-42c1-8193-20cca3f504bc-kube-api-access-jntlf\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.758228 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.758338 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.758392 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d695709a-c328-42c1-8193-20cca3f504bc-logs\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.758501 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d695709a-c328-42c1-8193-20cca3f504bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.758580 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.758633 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.863859 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.863969 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jntlf\" (UniqueName: \"kubernetes.io/projected/d695709a-c328-42c1-8193-20cca3f504bc-kube-api-access-jntlf\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.863989 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.864013 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.864064 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.864089 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d695709a-c328-42c1-8193-20cca3f504bc-logs\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.864139 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d695709a-c328-42c1-8193-20cca3f504bc-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.864176 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.864311 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.864768 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d695709a-c328-42c1-8193-20cca3f504bc-logs\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.864783 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d695709a-c328-42c1-8193-20cca3f504bc-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.869546 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.870811 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.870943 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-config-data\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.874338 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d695709a-c328-42c1-8193-20cca3f504bc-scripts\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.885458 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jntlf\" (UniqueName: \"kubernetes.io/projected/d695709a-c328-42c1-8193-20cca3f504bc-kube-api-access-jntlf\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " 
pod="openstack/glance-default-external-api-0" Dec 04 15:58:12 crc kubenswrapper[4878]: I1204 15:58:12.899853 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"d695709a-c328-42c1-8193-20cca3f504bc\") " pod="openstack/glance-default-external-api-0" Dec 04 15:58:13 crc kubenswrapper[4878]: I1204 15:58:13.177042 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 04 15:58:13 crc kubenswrapper[4878]: I1204 15:58:13.203061 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efccf44a-5dad-4080-8b51-208c7dc43e35" path="/var/lib/kubelet/pods/efccf44a-5dad-4080-8b51-208c7dc43e35/volumes" Dec 04 15:58:13 crc kubenswrapper[4878]: I1204 15:58:13.390241 4878 generic.go:334] "Generic (PLEG): container finished" podID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerID="c7f937f9e09854239b8a3f7f6cecb68d49493ac5d982c0886940a2f522415ec7" exitCode=0 Dec 04 15:58:13 crc kubenswrapper[4878]: I1204 15:58:13.390575 4878 generic.go:334] "Generic (PLEG): container finished" podID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerID="e05bace98c585392596ba073e8e26abfc851c87c344d44830f98877a53a56a2e" exitCode=2 Dec 04 15:58:13 crc kubenswrapper[4878]: I1204 15:58:13.390587 4878 generic.go:334] "Generic (PLEG): container finished" podID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerID="34bca8fc5e24b34cf73b9fb24af7ed94f80abbcccaf50676a82f2b4f1e3f28a3" exitCode=0 Dec 04 15:58:13 crc kubenswrapper[4878]: I1204 15:58:13.390302 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerDied","Data":"c7f937f9e09854239b8a3f7f6cecb68d49493ac5d982c0886940a2f522415ec7"} Dec 04 15:58:13 crc kubenswrapper[4878]: I1204 15:58:13.390647 4878 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerDied","Data":"e05bace98c585392596ba073e8e26abfc851c87c344d44830f98877a53a56a2e"} Dec 04 15:58:13 crc kubenswrapper[4878]: I1204 15:58:13.390668 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerDied","Data":"34bca8fc5e24b34cf73b9fb24af7ed94f80abbcccaf50676a82f2b4f1e3f28a3"} Dec 04 15:58:13 crc kubenswrapper[4878]: W1204 15:58:13.753026 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd695709a_c328_42c1_8193_20cca3f504bc.slice/crio-c66614ce81da38cdddd2afd578a43c14c2c8f21450219b32b0679f1e701b168a WatchSource:0}: Error finding container c66614ce81da38cdddd2afd578a43c14c2c8f21450219b32b0679f1e701b168a: Status 404 returned error can't find the container with id c66614ce81da38cdddd2afd578a43c14c2c8f21450219b32b0679f1e701b168a Dec 04 15:58:13 crc kubenswrapper[4878]: I1204 15:58:13.754674 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 04 15:58:14 crc kubenswrapper[4878]: I1204 15:58:14.401997 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d695709a-c328-42c1-8193-20cca3f504bc","Type":"ContainerStarted","Data":"c66614ce81da38cdddd2afd578a43c14c2c8f21450219b32b0679f1e701b168a"} Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.313467 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.417176 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-logs\") pod \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.417279 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.417329 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-httpd-run\") pod \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.417487 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltk79\" (UniqueName: \"kubernetes.io/projected/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-kube-api-access-ltk79\") pod \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.417520 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-combined-ca-bundle\") pod \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.417702 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-config-data\") pod \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.417796 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-internal-tls-certs\") pod \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.417983 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-scripts\") pod \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\" (UID: \"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.424163 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-logs" (OuterVolumeSpecName: "logs") pod "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" (UID: "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.425220 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-kube-api-access-ltk79" (OuterVolumeSpecName: "kube-api-access-ltk79") pod "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" (UID: "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4"). InnerVolumeSpecName "kube-api-access-ltk79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.425265 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" (UID: "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.445155 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-scripts" (OuterVolumeSpecName: "scripts") pod "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" (UID: "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.478082 4878 generic.go:334] "Generic (PLEG): container finished" podID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerID="bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767" exitCode=0 Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.478283 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.478316 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4","Type":"ContainerDied","Data":"bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767"} Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.479633 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"127be8ff-e2ac-4c65-8bff-fe878e5d8eb4","Type":"ContainerDied","Data":"7b14bedb5ffbf1c5a15d93c7a5f9d3ea787e2c916f5907b14774520a7fb018d7"} Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.479791 4878 scope.go:117] "RemoveContainer" containerID="bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.487813 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance") pod "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" (UID: "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.501206 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d695709a-c328-42c1-8193-20cca3f504bc","Type":"ContainerStarted","Data":"cb3edfe34ebf9a5fb09096e7ce36a5ee45f3fcf9b82ecd88665f9e36d715e57a"} Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.522919 4878 generic.go:334] "Generic (PLEG): container finished" podID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerID="ae9ce1e462c23c4a6153d1a173184c0d2cf2f116b04be1a48335b239c547a6f7" exitCode=0 Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.523292 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerDied","Data":"ae9ce1e462c23c4a6153d1a173184c0d2cf2f116b04be1a48335b239c547a6f7"} Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.524589 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.524639 4878 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.524650 4878 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.524660 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltk79\" (UniqueName: \"kubernetes.io/projected/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-kube-api-access-ltk79\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc 
kubenswrapper[4878]: I1204 15:58:15.524671 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.557370 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" (UID: "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.564006 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-config-data" (OuterVolumeSpecName: "config-data") pod "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" (UID: "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.577470 4878 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.583514 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" (UID: "127be8ff-e2ac-4c65-8bff-fe878e5d8eb4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.626216 4878 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.626265 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.626281 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.626293 4878 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.683186 4878 scope.go:117] "RemoveContainer" containerID="3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.695293 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.709601 4878 scope.go:117] "RemoveContainer" containerID="bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767" Dec 04 15:58:15 crc kubenswrapper[4878]: E1204 15:58:15.710442 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767\": container with ID starting with bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767 not found: ID does not exist" containerID="bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.710587 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767"} err="failed to get container status \"bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767\": rpc error: code = NotFound desc = could not find container \"bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767\": container with ID starting with bf524361ebab7d15b0d341d03f84f7b0999a0b324ef873b317c1b5abd0c25767 not found: ID does not exist" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.710773 4878 scope.go:117] "RemoveContainer" containerID="3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a" Dec 04 15:58:15 crc kubenswrapper[4878]: E1204 15:58:15.711187 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a\": container with ID starting with 3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a not found: ID does not exist" containerID="3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.711287 
4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a"} err="failed to get container status \"3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a\": rpc error: code = NotFound desc = could not find container \"3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a\": container with ID starting with 3c487c89bf4ce0c11378b169cecf307ebbc740134d596c7b715c6a5e41292b8a not found: ID does not exist" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.727061 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxpvt\" (UniqueName: \"kubernetes.io/projected/365e84d0-4c76-44c5-8b82-26c2eef5547d-kube-api-access-hxpvt\") pod \"365e84d0-4c76-44c5-8b82-26c2eef5547d\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.727163 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-log-httpd\") pod \"365e84d0-4c76-44c5-8b82-26c2eef5547d\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.727327 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-config-data\") pod \"365e84d0-4c76-44c5-8b82-26c2eef5547d\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.727377 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-sg-core-conf-yaml\") pod \"365e84d0-4c76-44c5-8b82-26c2eef5547d\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 
15:58:15.727454 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-combined-ca-bundle\") pod \"365e84d0-4c76-44c5-8b82-26c2eef5547d\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.727607 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-scripts\") pod \"365e84d0-4c76-44c5-8b82-26c2eef5547d\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.727667 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-run-httpd\") pod \"365e84d0-4c76-44c5-8b82-26c2eef5547d\" (UID: \"365e84d0-4c76-44c5-8b82-26c2eef5547d\") " Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.727769 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "365e84d0-4c76-44c5-8b82-26c2eef5547d" (UID: "365e84d0-4c76-44c5-8b82-26c2eef5547d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.728388 4878 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.728676 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "365e84d0-4c76-44c5-8b82-26c2eef5547d" (UID: "365e84d0-4c76-44c5-8b82-26c2eef5547d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.747106 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/365e84d0-4c76-44c5-8b82-26c2eef5547d-kube-api-access-hxpvt" (OuterVolumeSpecName: "kube-api-access-hxpvt") pod "365e84d0-4c76-44c5-8b82-26c2eef5547d" (UID: "365e84d0-4c76-44c5-8b82-26c2eef5547d"). InnerVolumeSpecName "kube-api-access-hxpvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.748136 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-scripts" (OuterVolumeSpecName: "scripts") pod "365e84d0-4c76-44c5-8b82-26c2eef5547d" (UID: "365e84d0-4c76-44c5-8b82-26c2eef5547d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.827165 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "365e84d0-4c76-44c5-8b82-26c2eef5547d" (UID: "365e84d0-4c76-44c5-8b82-26c2eef5547d"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.830035 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.830061 4878 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/365e84d0-4c76-44c5-8b82-26c2eef5547d-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.830072 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxpvt\" (UniqueName: \"kubernetes.io/projected/365e84d0-4c76-44c5-8b82-26c2eef5547d-kube-api-access-hxpvt\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.830081 4878 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.844593 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.862786 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.872191 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "365e84d0-4c76-44c5-8b82-26c2eef5547d" (UID: "365e84d0-4c76-44c5-8b82-26c2eef5547d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.875640 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:58:15 crc kubenswrapper[4878]: E1204 15:58:15.876157 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="ceilometer-central-agent" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876177 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="ceilometer-central-agent" Dec 04 15:58:15 crc kubenswrapper[4878]: E1204 15:58:15.876197 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerName="glance-httpd" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876203 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerName="glance-httpd" Dec 04 15:58:15 crc kubenswrapper[4878]: E1204 15:58:15.876218 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="sg-core" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876226 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="sg-core" Dec 04 15:58:15 crc kubenswrapper[4878]: E1204 15:58:15.876254 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerName="glance-log" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876259 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerName="glance-log" Dec 04 15:58:15 crc kubenswrapper[4878]: E1204 15:58:15.876274 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" 
containerName="ceilometer-notification-agent" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876280 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="ceilometer-notification-agent" Dec 04 15:58:15 crc kubenswrapper[4878]: E1204 15:58:15.876288 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="proxy-httpd" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876294 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="proxy-httpd" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876474 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="sg-core" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876494 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="ceilometer-notification-agent" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876507 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerName="glance-httpd" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876517 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="ceilometer-central-agent" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876530 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" containerName="glance-log" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.876540 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" containerName="proxy-httpd" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.877713 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.880513 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.880923 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.912500 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.913276 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-config-data" (OuterVolumeSpecName: "config-data") pod "365e84d0-4c76-44c5-8b82-26c2eef5547d" (UID: "365e84d0-4c76-44c5-8b82-26c2eef5547d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.931233 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:15 crc kubenswrapper[4878]: I1204 15:58:15.931266 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/365e84d0-4c76-44c5-8b82-26c2eef5547d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.033358 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86a592b1-9417-4993-9470-f6077542c0af-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 
15:58:16.033418 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.033443 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86a592b1-9417-4993-9470-f6077542c0af-logs\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.034057 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqfhj\" (UniqueName: \"kubernetes.io/projected/86a592b1-9417-4993-9470-f6077542c0af-kube-api-access-lqfhj\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.034323 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.034423 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc 
kubenswrapper[4878]: I1204 15:58:16.034550 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.034694 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.136858 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86a592b1-9417-4993-9470-f6077542c0af-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.136953 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.136982 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86a592b1-9417-4993-9470-f6077542c0af-logs\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.137043 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqfhj\" (UniqueName: \"kubernetes.io/projected/86a592b1-9417-4993-9470-f6077542c0af-kube-api-access-lqfhj\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.137114 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.137159 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.137207 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.137262 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.137777 4878 operation_generator.go:580] "MountVolume.MountDevice 
succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.138340 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86a592b1-9417-4993-9470-f6077542c0af-logs\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.138652 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/86a592b1-9417-4993-9470-f6077542c0af-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.141544 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-scripts\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.142698 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-config-data\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.144252 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.144815 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/86a592b1-9417-4993-9470-f6077542c0af-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.156036 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqfhj\" (UniqueName: \"kubernetes.io/projected/86a592b1-9417-4993-9470-f6077542c0af-kube-api-access-lqfhj\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.178556 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"86a592b1-9417-4993-9470-f6077542c0af\") " pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.206098 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.545455 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"365e84d0-4c76-44c5-8b82-26c2eef5547d","Type":"ContainerDied","Data":"6f890e43b270a3740cf8dad70c97e6e75314b5b6482f8187ceae1b25b5139218"} Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.546024 4878 scope.go:117] "RemoveContainer" containerID="c7f937f9e09854239b8a3f7f6cecb68d49493ac5d982c0886940a2f522415ec7" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.545537 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.563781 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d695709a-c328-42c1-8193-20cca3f504bc","Type":"ContainerStarted","Data":"53611573002a59f250ac8303d7519403c08cf2144cc519f3bac2c46c7d58079d"} Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.580144 4878 scope.go:117] "RemoveContainer" containerID="e05bace98c585392596ba073e8e26abfc851c87c344d44830f98877a53a56a2e" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.616849 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.61682429 podStartE2EDuration="4.61682429s" podCreationTimestamp="2025-12-04 15:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:16.588000825 +0000 UTC m=+1340.550537781" watchObservedRunningTime="2025-12-04 15:58:16.61682429 +0000 UTC m=+1340.579361246" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.654471 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.655931 4878 
scope.go:117] "RemoveContainer" containerID="34bca8fc5e24b34cf73b9fb24af7ed94f80abbcccaf50676a82f2b4f1e3f28a3" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.670097 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.691898 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.696564 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.699815 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.700114 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.703370 4878 scope.go:117] "RemoveContainer" containerID="ae9ce1e462c23c4a6153d1a173184c0d2cf2f116b04be1a48335b239c547a6f7" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.708489 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.819882 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.855311 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.855628 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.855813 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zml4w\" (UniqueName: \"kubernetes.io/projected/780eaf3f-f788-43f7-adc3-76838212bb53-kube-api-access-zml4w\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.856026 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-run-httpd\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.856139 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-log-httpd\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.856328 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-scripts\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.856451 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-config-data\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " 
pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.958891 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-log-httpd\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.959229 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-scripts\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.959398 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-config-data\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.959609 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.959731 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.959759 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-log-httpd\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.960058 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zml4w\" (UniqueName: \"kubernetes.io/projected/780eaf3f-f788-43f7-adc3-76838212bb53-kube-api-access-zml4w\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.960254 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-run-httpd\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.960608 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-run-httpd\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.966273 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.966784 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 
15:58:16.969927 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-config-data\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.978151 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-scripts\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:16 crc kubenswrapper[4878]: I1204 15:58:16.980712 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zml4w\" (UniqueName: \"kubernetes.io/projected/780eaf3f-f788-43f7-adc3-76838212bb53-kube-api-access-zml4w\") pod \"ceilometer-0\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " pod="openstack/ceilometer-0" Dec 04 15:58:17 crc kubenswrapper[4878]: I1204 15:58:17.036789 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:17 crc kubenswrapper[4878]: I1204 15:58:17.215110 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="127be8ff-e2ac-4c65-8bff-fe878e5d8eb4" path="/var/lib/kubelet/pods/127be8ff-e2ac-4c65-8bff-fe878e5d8eb4/volumes" Dec 04 15:58:17 crc kubenswrapper[4878]: I1204 15:58:17.216246 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="365e84d0-4c76-44c5-8b82-26c2eef5547d" path="/var/lib/kubelet/pods/365e84d0-4c76-44c5-8b82-26c2eef5547d/volumes" Dec 04 15:58:17 crc kubenswrapper[4878]: I1204 15:58:17.575376 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86a592b1-9417-4993-9470-f6077542c0af","Type":"ContainerStarted","Data":"933aa819a299c65f1c8ca7808db6e6493426e675c153e2ac37eaea1b26345932"} Dec 04 15:58:17 crc kubenswrapper[4878]: I1204 15:58:17.580115 4878 generic.go:334] "Generic (PLEG): container finished" podID="1d91c912-ec19-4cf2-ade1-d8c9a9df95b5" containerID="f62546df1a806b4affe0ec75a0cd217b50415d73d315dcdc9c72d0d10d3a53ab" exitCode=0 Dec 04 15:58:17 crc kubenswrapper[4878]: I1204 15:58:17.580231 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lsw79" event={"ID":"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5","Type":"ContainerDied","Data":"f62546df1a806b4affe0ec75a0cd217b50415d73d315dcdc9c72d0d10d3a53ab"} Dec 04 15:58:17 crc kubenswrapper[4878]: W1204 15:58:17.665008 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod780eaf3f_f788_43f7_adc3_76838212bb53.slice/crio-ea98e5c8153b2bde567376091a2f5a095e39961e43d9cbe6871b09fef306ba7c WatchSource:0}: Error finding container ea98e5c8153b2bde567376091a2f5a095e39961e43d9cbe6871b09fef306ba7c: Status 404 returned error can't find the container with id ea98e5c8153b2bde567376091a2f5a095e39961e43d9cbe6871b09fef306ba7c Dec 04 
15:58:17 crc kubenswrapper[4878]: I1204 15:58:17.672839 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:18 crc kubenswrapper[4878]: I1204 15:58:18.596937 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86a592b1-9417-4993-9470-f6077542c0af","Type":"ContainerStarted","Data":"9c18c6563801c02299d0aa2f241ec9c25b2ea1ecb7aeeea27399f6ed657722af"} Dec 04 15:58:18 crc kubenswrapper[4878]: I1204 15:58:18.600135 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"86a592b1-9417-4993-9470-f6077542c0af","Type":"ContainerStarted","Data":"7e91f505876246317d6bec93c16511b380ea040ad6417359231c524faf86b4b9"} Dec 04 15:58:18 crc kubenswrapper[4878]: I1204 15:58:18.609212 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerStarted","Data":"ea98e5c8153b2bde567376091a2f5a095e39961e43d9cbe6871b09fef306ba7c"} Dec 04 15:58:18 crc kubenswrapper[4878]: I1204 15:58:18.613402 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:18 crc kubenswrapper[4878]: I1204 15:58:18.635090 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.635062757 podStartE2EDuration="3.635062757s" podCreationTimestamp="2025-12-04 15:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:18.625721762 +0000 UTC m=+1342.588258728" watchObservedRunningTime="2025-12-04 15:58:18.635062757 +0000 UTC m=+1342.597599713" Dec 04 15:58:18 crc kubenswrapper[4878]: I1204 15:58:18.997970 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.105995 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-combined-ca-bundle\") pod \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.106614 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z687\" (UniqueName: \"kubernetes.io/projected/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-kube-api-access-6z687\") pod \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.106826 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-scripts\") pod \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.107100 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-config-data\") pod \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\" (UID: \"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5\") " Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.112696 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-scripts" (OuterVolumeSpecName: "scripts") pod "1d91c912-ec19-4cf2-ade1-d8c9a9df95b5" (UID: "1d91c912-ec19-4cf2-ade1-d8c9a9df95b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.119521 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-kube-api-access-6z687" (OuterVolumeSpecName: "kube-api-access-6z687") pod "1d91c912-ec19-4cf2-ade1-d8c9a9df95b5" (UID: "1d91c912-ec19-4cf2-ade1-d8c9a9df95b5"). InnerVolumeSpecName "kube-api-access-6z687". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.145839 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-config-data" (OuterVolumeSpecName: "config-data") pod "1d91c912-ec19-4cf2-ade1-d8c9a9df95b5" (UID: "1d91c912-ec19-4cf2-ade1-d8c9a9df95b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.159667 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d91c912-ec19-4cf2-ade1-d8c9a9df95b5" (UID: "1d91c912-ec19-4cf2-ade1-d8c9a9df95b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.211097 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z687\" (UniqueName: \"kubernetes.io/projected/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-kube-api-access-6z687\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.211145 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.211161 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.211175 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.625300 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerStarted","Data":"f783aff16e5a23703070600511b7cbb28ea7a09eb583125b636f579b73c6d106"} Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.628228 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-lsw79" event={"ID":"1d91c912-ec19-4cf2-ade1-d8c9a9df95b5","Type":"ContainerDied","Data":"c5e49a50958ab3f91830eccfc1f950630212b6db322214da4c185c8c6d986840"} Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.628295 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5e49a50958ab3f91830eccfc1f950630212b6db322214da4c185c8c6d986840" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 
15:58:19.628258 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-lsw79" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.731307 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 04 15:58:19 crc kubenswrapper[4878]: E1204 15:58:19.731913 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d91c912-ec19-4cf2-ade1-d8c9a9df95b5" containerName="nova-cell0-conductor-db-sync" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.731935 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d91c912-ec19-4cf2-ade1-d8c9a9df95b5" containerName="nova-cell0-conductor-db-sync" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.732153 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d91c912-ec19-4cf2-ade1-d8c9a9df95b5" containerName="nova-cell0-conductor-db-sync" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.733094 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.737687 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.742790 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p5lt8" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.752059 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.823399 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wnwh\" (UniqueName: \"kubernetes.io/projected/89cdfafa-37af-4e84-8dbf-a9022767eab6-kube-api-access-5wnwh\") pod \"nova-cell0-conductor-0\" (UID: \"89cdfafa-37af-4e84-8dbf-a9022767eab6\") " pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.823596 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cdfafa-37af-4e84-8dbf-a9022767eab6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"89cdfafa-37af-4e84-8dbf-a9022767eab6\") " pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.823646 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cdfafa-37af-4e84-8dbf-a9022767eab6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"89cdfafa-37af-4e84-8dbf-a9022767eab6\") " pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.925931 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89cdfafa-37af-4e84-8dbf-a9022767eab6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"89cdfafa-37af-4e84-8dbf-a9022767eab6\") " pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.926012 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cdfafa-37af-4e84-8dbf-a9022767eab6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"89cdfafa-37af-4e84-8dbf-a9022767eab6\") " pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.926043 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wnwh\" (UniqueName: \"kubernetes.io/projected/89cdfafa-37af-4e84-8dbf-a9022767eab6-kube-api-access-5wnwh\") pod \"nova-cell0-conductor-0\" (UID: \"89cdfafa-37af-4e84-8dbf-a9022767eab6\") " pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.933228 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89cdfafa-37af-4e84-8dbf-a9022767eab6-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"89cdfafa-37af-4e84-8dbf-a9022767eab6\") " pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.933268 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89cdfafa-37af-4e84-8dbf-a9022767eab6-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"89cdfafa-37af-4e84-8dbf-a9022767eab6\") " pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:19 crc kubenswrapper[4878]: I1204 15:58:19.951551 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wnwh\" (UniqueName: \"kubernetes.io/projected/89cdfafa-37af-4e84-8dbf-a9022767eab6-kube-api-access-5wnwh\") pod \"nova-cell0-conductor-0\" (UID: 
\"89cdfafa-37af-4e84-8dbf-a9022767eab6\") " pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:20 crc kubenswrapper[4878]: I1204 15:58:20.056301 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:20 crc kubenswrapper[4878]: I1204 15:58:20.532689 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 04 15:58:20 crc kubenswrapper[4878]: I1204 15:58:20.644981 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerStarted","Data":"f0e1ff5ac44068f362f944783904605fbcc4f7a076d68a3d54d48a03c8d4ad3b"} Dec 04 15:58:20 crc kubenswrapper[4878]: I1204 15:58:20.647526 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"89cdfafa-37af-4e84-8dbf-a9022767eab6","Type":"ContainerStarted","Data":"a180d17e50713b059562e1f0ba6951894b335be116da437b25f9d3c7956f40b7"} Dec 04 15:58:21 crc kubenswrapper[4878]: I1204 15:58:21.670579 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerStarted","Data":"3fa5f06edfa5a80b61d069413e9a7d68685064485c5578c676a6e91b2b230356"} Dec 04 15:58:21 crc kubenswrapper[4878]: I1204 15:58:21.673858 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"89cdfafa-37af-4e84-8dbf-a9022767eab6","Type":"ContainerStarted","Data":"3d850ee0363486c0759c98bf915d4c150253abbb0484c56324779db95a22d9f5"} Dec 04 15:58:21 crc kubenswrapper[4878]: I1204 15:58:21.674071 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:21 crc kubenswrapper[4878]: I1204 15:58:21.695937 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" 
podStartSLOduration=2.695916935 podStartE2EDuration="2.695916935s" podCreationTimestamp="2025-12-04 15:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:21.694362016 +0000 UTC m=+1345.656898972" watchObservedRunningTime="2025-12-04 15:58:21.695916935 +0000 UTC m=+1345.658453891" Dec 04 15:58:22 crc kubenswrapper[4878]: I1204 15:58:22.691349 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="ceilometer-central-agent" containerID="cri-o://f783aff16e5a23703070600511b7cbb28ea7a09eb583125b636f579b73c6d106" gracePeriod=30 Dec 04 15:58:22 crc kubenswrapper[4878]: I1204 15:58:22.692238 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerStarted","Data":"629846a595c877f6ee7388ebd970aea3eb52e5b071523b0a8407425846676642"} Dec 04 15:58:22 crc kubenswrapper[4878]: I1204 15:58:22.692287 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 15:58:22 crc kubenswrapper[4878]: I1204 15:58:22.692658 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="proxy-httpd" containerID="cri-o://629846a595c877f6ee7388ebd970aea3eb52e5b071523b0a8407425846676642" gracePeriod=30 Dec 04 15:58:22 crc kubenswrapper[4878]: I1204 15:58:22.692715 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="sg-core" containerID="cri-o://3fa5f06edfa5a80b61d069413e9a7d68685064485c5578c676a6e91b2b230356" gracePeriod=30 Dec 04 15:58:22 crc kubenswrapper[4878]: I1204 15:58:22.692756 4878 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/ceilometer-0" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="ceilometer-notification-agent" containerID="cri-o://f0e1ff5ac44068f362f944783904605fbcc4f7a076d68a3d54d48a03c8d4ad3b" gracePeriod=30 Dec 04 15:58:22 crc kubenswrapper[4878]: I1204 15:58:22.728655 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.608240013 podStartE2EDuration="6.728628186s" podCreationTimestamp="2025-12-04 15:58:16 +0000 UTC" firstStartedPulling="2025-12-04 15:58:17.668665082 +0000 UTC m=+1341.631202038" lastFinishedPulling="2025-12-04 15:58:21.789053255 +0000 UTC m=+1345.751590211" observedRunningTime="2025-12-04 15:58:22.721989729 +0000 UTC m=+1346.684526695" watchObservedRunningTime="2025-12-04 15:58:22.728628186 +0000 UTC m=+1346.691165142" Dec 04 15:58:22 crc kubenswrapper[4878]: E1204 15:58:22.863295 4878 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod780eaf3f_f788_43f7_adc3_76838212bb53.slice/crio-conmon-3fa5f06edfa5a80b61d069413e9a7d68685064485c5578c676a6e91b2b230356.scope\": RecentStats: unable to find data in memory cache]" Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.177838 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.177919 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.222788 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.226316 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.703353 4878 generic.go:334] "Generic (PLEG): container finished" podID="780eaf3f-f788-43f7-adc3-76838212bb53" containerID="629846a595c877f6ee7388ebd970aea3eb52e5b071523b0a8407425846676642" exitCode=0 Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.703387 4878 generic.go:334] "Generic (PLEG): container finished" podID="780eaf3f-f788-43f7-adc3-76838212bb53" containerID="3fa5f06edfa5a80b61d069413e9a7d68685064485c5578c676a6e91b2b230356" exitCode=2 Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.703394 4878 generic.go:334] "Generic (PLEG): container finished" podID="780eaf3f-f788-43f7-adc3-76838212bb53" containerID="f0e1ff5ac44068f362f944783904605fbcc4f7a076d68a3d54d48a03c8d4ad3b" exitCode=0 Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.703445 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerDied","Data":"629846a595c877f6ee7388ebd970aea3eb52e5b071523b0a8407425846676642"} Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.703529 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerDied","Data":"3fa5f06edfa5a80b61d069413e9a7d68685064485c5578c676a6e91b2b230356"} Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.703544 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerDied","Data":"f0e1ff5ac44068f362f944783904605fbcc4f7a076d68a3d54d48a03c8d4ad3b"} Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.703709 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 04 15:58:23 crc kubenswrapper[4878]: I1204 15:58:23.703733 4878 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.085728 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.556465 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-hq8dz"] Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.557739 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.559600 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.560072 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.577717 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hq8dz"] Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.655607 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-scripts\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.655995 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-config-data\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.656020 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jzd2\" (UniqueName: \"kubernetes.io/projected/e365e201-9030-4248-a6d6-0c250d3f3251-kube-api-access-2jzd2\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.656087 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.758444 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-scripts\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.758496 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-config-data\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.758519 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jzd2\" (UniqueName: \"kubernetes.io/projected/e365e201-9030-4248-a6d6-0c250d3f3251-kube-api-access-2jzd2\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.758572 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.766012 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-scripts\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.768632 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-config-data\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.774623 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.786294 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jzd2\" (UniqueName: \"kubernetes.io/projected/e365e201-9030-4248-a6d6-0c250d3f3251-kube-api-access-2jzd2\") pod \"nova-cell0-cell-mapping-hq8dz\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:25 crc kubenswrapper[4878]: I1204 15:58:25.879469 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.041638 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.049797 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.082291 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.167199 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.184701 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6xks\" (UniqueName: \"kubernetes.io/projected/f54d7c14-2041-4319-9d8a-013b0c0e9761-kube-api-access-n6xks\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.184789 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54d7c14-2041-4319-9d8a-013b0c0e9761-logs\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.184897 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.184946 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-config-data\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.193862 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.195754 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.199373 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.207467 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.207513 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.255000 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.268436 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.275154 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.291982 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-config-data\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.292148 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6xks\" (UniqueName: \"kubernetes.io/projected/f54d7c14-2041-4319-9d8a-013b0c0e9761-kube-api-access-n6xks\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.292180 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54d7c14-2041-4319-9d8a-013b0c0e9761-logs\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.292218 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.296269 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54d7c14-2041-4319-9d8a-013b0c0e9761-logs\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.304811 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-config-data\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.307150 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.310322 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.330650 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.348346 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6xks\" (UniqueName: \"kubernetes.io/projected/f54d7c14-2041-4319-9d8a-013b0c0e9761-kube-api-access-n6xks\") pod \"nova-metadata-0\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.371732 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.382740 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.396629 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd51020e-01f4-4ef3-ba9e-28c46575e18e-logs\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.396672 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkcrc\" (UniqueName: \"kubernetes.io/projected/fd51020e-01f4-4ef3-ba9e-28c46575e18e-kube-api-access-wkcrc\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.396769 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-config-data\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.396833 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.400607 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.403573 4878 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.418053 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.426420 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.431132 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.472624 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-8rl4z"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.485507 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.489128 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-8rl4z"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.499234 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-config-data\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.499317 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7qxl\" (UniqueName: \"kubernetes.io/projected/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-kube-api-access-r7qxl\") pod \"nova-scheduler-0\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.499343 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.499360 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.499419 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-config-data\") pod \"nova-scheduler-0\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.499452 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd51020e-01f4-4ef3-ba9e-28c46575e18e-logs\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.499468 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkcrc\" (UniqueName: \"kubernetes.io/projected/fd51020e-01f4-4ef3-ba9e-28c46575e18e-kube-api-access-wkcrc\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.503316 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd51020e-01f4-4ef3-ba9e-28c46575e18e-logs\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc 
kubenswrapper[4878]: I1204 15:58:26.509615 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-config-data\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.521165 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.526009 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkcrc\" (UniqueName: \"kubernetes.io/projected/fd51020e-01f4-4ef3-ba9e-28c46575e18e-kube-api-access-wkcrc\") pod \"nova-api-0\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.553952 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.582412 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.582567 4878 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.588998 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.608740 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.608831 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.608906 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nwjf\" (UniqueName: \"kubernetes.io/projected/8be76ef1-f903-46e5-a874-641b88528cb6-kube-api-access-6nwjf\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.609587 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-config\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.609620 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.609663 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.609685 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-svc\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.609715 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54w4d\" (UniqueName: \"kubernetes.io/projected/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-kube-api-access-54w4d\") pod \"nova-cell1-novncproxy-0\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.609764 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r7qxl\" (UniqueName: \"kubernetes.io/projected/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-kube-api-access-r7qxl\") pod \"nova-scheduler-0\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.609785 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.609899 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-config-data\") pod \"nova-scheduler-0\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.609943 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.615228 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.629729 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-config-data\") pod \"nova-scheduler-0\" (UID: 
\"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.643562 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7qxl\" (UniqueName: \"kubernetes.io/projected/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-kube-api-access-r7qxl\") pod \"nova-scheduler-0\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.696445 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.719186 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.719262 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.719347 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.719377 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nwjf\" (UniqueName: 
\"kubernetes.io/projected/8be76ef1-f903-46e5-a874-641b88528cb6-kube-api-access-6nwjf\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.719428 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.719465 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-config\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.719492 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.719514 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-svc\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.719543 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54w4d\" (UniqueName: 
\"kubernetes.io/projected/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-kube-api-access-54w4d\") pod \"nova-cell1-novncproxy-0\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.730393 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.731398 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.755062 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54w4d\" (UniqueName: \"kubernetes.io/projected/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-kube-api-access-54w4d\") pod \"nova-cell1-novncproxy-0\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.758345 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nwjf\" (UniqueName: \"kubernetes.io/projected/8be76ef1-f903-46e5-a874-641b88528cb6-kube-api-access-6nwjf\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.773203 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.793137 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-hq8dz"] Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.793143 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.810387 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-config\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.821668 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-svc\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.821826 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-8rl4z\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.906291 4878 generic.go:334] "Generic (PLEG): container finished" 
podID="780eaf3f-f788-43f7-adc3-76838212bb53" containerID="f783aff16e5a23703070600511b7cbb28ea7a09eb583125b636f579b73c6d106" exitCode=0 Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.906939 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerDied","Data":"f783aff16e5a23703070600511b7cbb28ea7a09eb583125b636f579b73c6d106"} Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.908364 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:26 crc kubenswrapper[4878]: I1204 15:58:26.908722 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.053598 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.121211 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.313539 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.384364 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:58:27 crc kubenswrapper[4878]: W1204 15:58:27.388453 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd51020e_01f4_4ef3_ba9e_28c46575e18e.slice/crio-7c15549fa772342fc64908ff724717fcd9e8683a199a41046969c832000ad1e8 WatchSource:0}: Error finding container 7c15549fa772342fc64908ff724717fcd9e8683a199a41046969c832000ad1e8: Status 404 returned error can't find the container with id 7c15549fa772342fc64908ff724717fcd9e8683a199a41046969c832000ad1e8 Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.551251 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.593842 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j4t5x"] Dec 04 15:58:27 crc kubenswrapper[4878]: E1204 15:58:27.594360 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="ceilometer-notification-agent" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.594380 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="ceilometer-notification-agent" Dec 04 15:58:27 crc kubenswrapper[4878]: E1204 15:58:27.594413 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="sg-core" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.594419 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="sg-core" Dec 04 15:58:27 crc kubenswrapper[4878]: E1204 15:58:27.594435 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="ceilometer-central-agent" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.594443 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="ceilometer-central-agent" Dec 04 15:58:27 crc kubenswrapper[4878]: E1204 15:58:27.594468 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="proxy-httpd" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.594474 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="proxy-httpd" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.594673 4878 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="ceilometer-notification-agent" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.594687 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="sg-core" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.594698 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="proxy-httpd" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.594711 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" containerName="ceilometer-central-agent" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.595595 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.598414 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.598786 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.639809 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j4t5x"] Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.664753 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-sg-core-conf-yaml\") pod \"780eaf3f-f788-43f7-adc3-76838212bb53\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.664885 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-config-data\") pod \"780eaf3f-f788-43f7-adc3-76838212bb53\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.664931 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-scripts\") pod \"780eaf3f-f788-43f7-adc3-76838212bb53\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.664962 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zml4w\" (UniqueName: \"kubernetes.io/projected/780eaf3f-f788-43f7-adc3-76838212bb53-kube-api-access-zml4w\") pod \"780eaf3f-f788-43f7-adc3-76838212bb53\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.665036 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-combined-ca-bundle\") pod \"780eaf3f-f788-43f7-adc3-76838212bb53\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.665090 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-log-httpd\") pod \"780eaf3f-f788-43f7-adc3-76838212bb53\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.665118 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-run-httpd\") pod \"780eaf3f-f788-43f7-adc3-76838212bb53\" (UID: \"780eaf3f-f788-43f7-adc3-76838212bb53\") " Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.665405 4878 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml5k9\" (UniqueName: \"kubernetes.io/projected/2166f9a6-f18a-4637-b089-5c87576d24d5-kube-api-access-ml5k9\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.665520 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-scripts\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.665587 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.665635 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-config-data\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.670281 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "780eaf3f-f788-43f7-adc3-76838212bb53" (UID: "780eaf3f-f788-43f7-adc3-76838212bb53"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.674486 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "780eaf3f-f788-43f7-adc3-76838212bb53" (UID: "780eaf3f-f788-43f7-adc3-76838212bb53"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.680541 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780eaf3f-f788-43f7-adc3-76838212bb53-kube-api-access-zml4w" (OuterVolumeSpecName: "kube-api-access-zml4w") pod "780eaf3f-f788-43f7-adc3-76838212bb53" (UID: "780eaf3f-f788-43f7-adc3-76838212bb53"). InnerVolumeSpecName "kube-api-access-zml4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.682969 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-scripts" (OuterVolumeSpecName: "scripts") pod "780eaf3f-f788-43f7-adc3-76838212bb53" (UID: "780eaf3f-f788-43f7-adc3-76838212bb53"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.718997 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "780eaf3f-f788-43f7-adc3-76838212bb53" (UID: "780eaf3f-f788-43f7-adc3-76838212bb53"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.771215 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-scripts\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.771310 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.771360 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-config-data\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.771383 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml5k9\" (UniqueName: \"kubernetes.io/projected/2166f9a6-f18a-4637-b089-5c87576d24d5-kube-api-access-ml5k9\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.771446 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.771459 4878 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-zml4w\" (UniqueName: \"kubernetes.io/projected/780eaf3f-f788-43f7-adc3-76838212bb53-kube-api-access-zml4w\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.771470 4878 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.771479 4878 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/780eaf3f-f788-43f7-adc3-76838212bb53-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.771493 4878 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.777295 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.785728 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.809115 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-scripts\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.811074 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-config-data\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.818484 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml5k9\" (UniqueName: \"kubernetes.io/projected/2166f9a6-f18a-4637-b089-5c87576d24d5-kube-api-access-ml5k9\") pod \"nova-cell1-conductor-db-sync-j4t5x\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.839459 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.851242 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-config-data" (OuterVolumeSpecName: "config-data") pod "780eaf3f-f788-43f7-adc3-76838212bb53" (UID: "780eaf3f-f788-43f7-adc3-76838212bb53"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.874687 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.896351 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "780eaf3f-f788-43f7-adc3-76838212bb53" (UID: "780eaf3f-f788-43f7-adc3-76838212bb53"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.923152 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"602f925b-f91f-4f6a-a6b6-5d5dc5c10175","Type":"ContainerStarted","Data":"4e01c1d1243ab3c3fe3ac35f57dab641bd1a3a8f605afd1255789a814c554671"} Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.926181 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd51020e-01f4-4ef3-ba9e-28c46575e18e","Type":"ContainerStarted","Data":"7c15549fa772342fc64908ff724717fcd9e8683a199a41046969c832000ad1e8"} Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.929435 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hq8dz" event={"ID":"e365e201-9030-4248-a6d6-0c250d3f3251","Type":"ContainerStarted","Data":"79a39fc4ccc7bd031ff1f8533c688719996126ebd4ac86f6d548bdad1754803b"} Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.929494 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hq8dz" event={"ID":"e365e201-9030-4248-a6d6-0c250d3f3251","Type":"ContainerStarted","Data":"3b623534c0d934a3e698894c8e15c624d863a7459355205304c0c18ffe9603e9"} Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.936934 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"591bb2b3-d6f8-4c4a-80bd-da2654b821f7","Type":"ContainerStarted","Data":"3f2d3461e732f496d606956583024a82d4fdff82d168c1b12697f9050b3af05c"} Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.940172 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54d7c14-2041-4319-9d8a-013b0c0e9761","Type":"ContainerStarted","Data":"8f124b6111b6bb3ee13a1e42d4178e14ce47c1e41a5b7e955cea21ec203d0c92"} Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.941229 4878 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.956075 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.960336 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"780eaf3f-f788-43f7-adc3-76838212bb53","Type":"ContainerDied","Data":"ea98e5c8153b2bde567376091a2f5a095e39961e43d9cbe6871b09fef306ba7c"} Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.960417 4878 scope.go:117] "RemoveContainer" containerID="629846a595c877f6ee7388ebd970aea3eb52e5b071523b0a8407425846676642" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.962682 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-hq8dz" podStartSLOduration=2.962661622 podStartE2EDuration="2.962661622s" podCreationTimestamp="2025-12-04 15:58:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:27.948342253 +0000 UTC m=+1351.910879209" watchObservedRunningTime="2025-12-04 15:58:27.962661622 +0000 UTC m=+1351.925198578" Dec 04 15:58:27 crc kubenswrapper[4878]: I1204 15:58:27.977149 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/780eaf3f-f788-43f7-adc3-76838212bb53-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.114596 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-8rl4z"] Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.132483 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.135724 4878 scope.go:117] "RemoveContainer" 
containerID="3fa5f06edfa5a80b61d069413e9a7d68685064485c5578c676a6e91b2b230356" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.145901 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.202932 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.208975 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.217968 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.218143 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.219472 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.258233 4878 scope.go:117] "RemoveContainer" containerID="f0e1ff5ac44068f362f944783904605fbcc4f7a076d68a3d54d48a03c8d4ad3b" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.400141 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prfg2\" (UniqueName: \"kubernetes.io/projected/72395eff-28a5-4604-b9ee-36a0c8cf6b37-kube-api-access-prfg2\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.400234 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc 
kubenswrapper[4878]: I1204 15:58:28.400349 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-scripts\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.400433 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-run-httpd\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.400462 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.400490 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-config-data\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.401236 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-log-httpd\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.421095 4878 scope.go:117] "RemoveContainer" containerID="f783aff16e5a23703070600511b7cbb28ea7a09eb583125b636f579b73c6d106" 
Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.505651 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-log-httpd\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.505821 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prfg2\" (UniqueName: \"kubernetes.io/projected/72395eff-28a5-4604-b9ee-36a0c8cf6b37-kube-api-access-prfg2\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.505941 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.506163 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-scripts\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.506339 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-run-httpd\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.506381 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.506402 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-config-data\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.507103 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-log-httpd\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.512634 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-run-httpd\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.519756 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-scripts\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.523170 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.524719 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-config-data\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.531772 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.533259 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prfg2\" (UniqueName: \"kubernetes.io/projected/72395eff-28a5-4604-b9ee-36a0c8cf6b37-kube-api-access-prfg2\") pod \"ceilometer-0\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") " pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.572884 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.678855 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j4t5x"] Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.976398 4878 generic.go:334] "Generic (PLEG): container finished" podID="8be76ef1-f903-46e5-a874-641b88528cb6" containerID="7bbbba7e6a411a4c28d87a32d91e39fce277f4ecb5af481919af06a783e3ae95" exitCode=0 Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.976517 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" event={"ID":"8be76ef1-f903-46e5-a874-641b88528cb6","Type":"ContainerDied","Data":"7bbbba7e6a411a4c28d87a32d91e39fce277f4ecb5af481919af06a783e3ae95"} Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.976561 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" event={"ID":"8be76ef1-f903-46e5-a874-641b88528cb6","Type":"ContainerStarted","Data":"07925f279cf77c7c82ebf3efa1d7d163635d198d53b253d570e645077d4b45ea"} Dec 04 15:58:28 crc kubenswrapper[4878]: I1204 15:58:28.991449 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j4t5x" event={"ID":"2166f9a6-f18a-4637-b089-5c87576d24d5","Type":"ContainerStarted","Data":"0a3a2dba875d7735e69fbe565af7988a02ce2b13e39c15df35a89a0713d71965"} Dec 04 15:58:29 crc kubenswrapper[4878]: I1204 15:58:29.230216 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="780eaf3f-f788-43f7-adc3-76838212bb53" path="/var/lib/kubelet/pods/780eaf3f-f788-43f7-adc3-76838212bb53/volumes" Dec 04 15:58:29 crc kubenswrapper[4878]: I1204 15:58:29.247139 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.037274 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j4t5x" 
event={"ID":"2166f9a6-f18a-4637-b089-5c87576d24d5","Type":"ContainerStarted","Data":"84046239a736de12ea5542c886205af7998f4efa9ade1db88fedc2431972fde1"} Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.061312 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerStarted","Data":"f097360015310efb9c3269986fac50d0eb09fd926b8a9e5c1480e414624c68b5"} Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.068972 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-j4t5x" podStartSLOduration=3.068939291 podStartE2EDuration="3.068939291s" podCreationTimestamp="2025-12-04 15:58:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:30.055660727 +0000 UTC m=+1354.018197683" watchObservedRunningTime="2025-12-04 15:58:30.068939291 +0000 UTC m=+1354.031476247" Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.069387 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" event={"ID":"8be76ef1-f903-46e5-a874-641b88528cb6","Type":"ContainerStarted","Data":"1b4be42dbfa0d664c473104873bfd90f7f7d030bd760cc57180dd2e9507fa369"} Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.069772 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.101350 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" podStartSLOduration=4.101319414 podStartE2EDuration="4.101319414s" podCreationTimestamp="2025-12-04 15:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:30.09517834 +0000 UTC 
m=+1354.057715306" watchObservedRunningTime="2025-12-04 15:58:30.101319414 +0000 UTC m=+1354.063856370" Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.330335 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.345182 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.346319 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.346442 4878 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 04 15:58:30 crc kubenswrapper[4878]: I1204 15:58:30.513365 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.147128 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"591bb2b3-d6f8-4c4a-80bd-da2654b821f7","Type":"ContainerStarted","Data":"e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3"} Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.148312 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="591bb2b3-d6f8-4c4a-80bd-da2654b821f7" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3" gracePeriod=30 Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.154088 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"602f925b-f91f-4f6a-a6b6-5d5dc5c10175","Type":"ContainerStarted","Data":"d764e67582f60d975d639f56f6bc8fc7eac62ff608c93c92a89080363efaafa3"} Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.159328 4878 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54d7c14-2041-4319-9d8a-013b0c0e9761","Type":"ContainerStarted","Data":"a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f"} Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.159396 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54d7c14-2041-4319-9d8a-013b0c0e9761","Type":"ContainerStarted","Data":"d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd"} Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.159568 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f54d7c14-2041-4319-9d8a-013b0c0e9761" containerName="nova-metadata-log" containerID="cri-o://d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd" gracePeriod=30 Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.160039 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f54d7c14-2041-4319-9d8a-013b0c0e9761" containerName="nova-metadata-metadata" containerID="cri-o://a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f" gracePeriod=30 Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.164834 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerStarted","Data":"46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574"} Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.164913 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerStarted","Data":"ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d"} Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.172077 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"fd51020e-01f4-4ef3-ba9e-28c46575e18e","Type":"ContainerStarted","Data":"584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808"} Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.172149 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd51020e-01f4-4ef3-ba9e-28c46575e18e","Type":"ContainerStarted","Data":"57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f"} Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.177593 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.088042752 podStartE2EDuration="9.177570389s" podCreationTimestamp="2025-12-04 15:58:26 +0000 UTC" firstStartedPulling="2025-12-04 15:58:27.791030231 +0000 UTC m=+1351.753567197" lastFinishedPulling="2025-12-04 15:58:33.880557878 +0000 UTC m=+1357.843094834" observedRunningTime="2025-12-04 15:58:35.167322671 +0000 UTC m=+1359.129859637" watchObservedRunningTime="2025-12-04 15:58:35.177570389 +0000 UTC m=+1359.140107345" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.240123 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.219403682 podStartE2EDuration="9.240095239s" podCreationTimestamp="2025-12-04 15:58:26 +0000 UTC" firstStartedPulling="2025-12-04 15:58:27.846997657 +0000 UTC m=+1351.809534613" lastFinishedPulling="2025-12-04 15:58:33.867689214 +0000 UTC m=+1357.830226170" observedRunningTime="2025-12-04 15:58:35.196149935 +0000 UTC m=+1359.158686891" watchObservedRunningTime="2025-12-04 15:58:35.240095239 +0000 UTC m=+1359.202632195" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.248178 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.66037362 podStartE2EDuration="9.248149932s" podCreationTimestamp="2025-12-04 15:58:26 +0000 UTC" 
firstStartedPulling="2025-12-04 15:58:27.298905389 +0000 UTC m=+1351.261442345" lastFinishedPulling="2025-12-04 15:58:33.886681711 +0000 UTC m=+1357.849218657" observedRunningTime="2025-12-04 15:58:35.227597265 +0000 UTC m=+1359.190134221" watchObservedRunningTime="2025-12-04 15:58:35.248149932 +0000 UTC m=+1359.210686888" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.264223 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.790709814 podStartE2EDuration="9.264189685s" podCreationTimestamp="2025-12-04 15:58:26 +0000 UTC" firstStartedPulling="2025-12-04 15:58:27.413066407 +0000 UTC m=+1351.375603373" lastFinishedPulling="2025-12-04 15:58:33.886546288 +0000 UTC m=+1357.849083244" observedRunningTime="2025-12-04 15:58:35.248315276 +0000 UTC m=+1359.210852252" watchObservedRunningTime="2025-12-04 15:58:35.264189685 +0000 UTC m=+1359.226726641" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.827333 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.940364 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-combined-ca-bundle\") pod \"f54d7c14-2041-4319-9d8a-013b0c0e9761\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.940459 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-config-data\") pod \"f54d7c14-2041-4319-9d8a-013b0c0e9761\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.940623 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6xks\" (UniqueName: \"kubernetes.io/projected/f54d7c14-2041-4319-9d8a-013b0c0e9761-kube-api-access-n6xks\") pod \"f54d7c14-2041-4319-9d8a-013b0c0e9761\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.940680 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54d7c14-2041-4319-9d8a-013b0c0e9761-logs\") pod \"f54d7c14-2041-4319-9d8a-013b0c0e9761\" (UID: \"f54d7c14-2041-4319-9d8a-013b0c0e9761\") " Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.941128 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54d7c14-2041-4319-9d8a-013b0c0e9761-logs" (OuterVolumeSpecName: "logs") pod "f54d7c14-2041-4319-9d8a-013b0c0e9761" (UID: "f54d7c14-2041-4319-9d8a-013b0c0e9761"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.941846 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f54d7c14-2041-4319-9d8a-013b0c0e9761-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.946360 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54d7c14-2041-4319-9d8a-013b0c0e9761-kube-api-access-n6xks" (OuterVolumeSpecName: "kube-api-access-n6xks") pod "f54d7c14-2041-4319-9d8a-013b0c0e9761" (UID: "f54d7c14-2041-4319-9d8a-013b0c0e9761"). InnerVolumeSpecName "kube-api-access-n6xks". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.982122 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-config-data" (OuterVolumeSpecName: "config-data") pod "f54d7c14-2041-4319-9d8a-013b0c0e9761" (UID: "f54d7c14-2041-4319-9d8a-013b0c0e9761"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:35 crc kubenswrapper[4878]: I1204 15:58:35.990185 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f54d7c14-2041-4319-9d8a-013b0c0e9761" (UID: "f54d7c14-2041-4319-9d8a-013b0c0e9761"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.043991 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6xks\" (UniqueName: \"kubernetes.io/projected/f54d7c14-2041-4319-9d8a-013b0c0e9761-kube-api-access-n6xks\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.044049 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.044061 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f54d7c14-2041-4319-9d8a-013b0c0e9761-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.188451 4878 generic.go:334] "Generic (PLEG): container finished" podID="f54d7c14-2041-4319-9d8a-013b0c0e9761" containerID="a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f" exitCode=0 Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.188495 4878 generic.go:334] "Generic (PLEG): container finished" podID="f54d7c14-2041-4319-9d8a-013b0c0e9761" containerID="d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd" exitCode=143 Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.188562 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54d7c14-2041-4319-9d8a-013b0c0e9761","Type":"ContainerDied","Data":"a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f"} Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.188629 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.188681 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54d7c14-2041-4319-9d8a-013b0c0e9761","Type":"ContainerDied","Data":"d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd"} Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.188697 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f54d7c14-2041-4319-9d8a-013b0c0e9761","Type":"ContainerDied","Data":"8f124b6111b6bb3ee13a1e42d4178e14ce47c1e41a5b7e955cea21ec203d0c92"} Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.188721 4878 scope.go:117] "RemoveContainer" containerID="a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.192257 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerStarted","Data":"3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b"} Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.214726 4878 scope.go:117] "RemoveContainer" containerID="d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.250636 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.265274 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.265830 4878 scope.go:117] "RemoveContainer" containerID="a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f" Dec 04 15:58:36 crc kubenswrapper[4878]: E1204 15:58:36.266557 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f\": container with ID starting with a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f not found: ID does not exist" containerID="a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.266628 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f"} err="failed to get container status \"a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f\": rpc error: code = NotFound desc = could not find container \"a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f\": container with ID starting with a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f not found: ID does not exist" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.266671 4878 scope.go:117] "RemoveContainer" containerID="d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd" Dec 04 15:58:36 crc kubenswrapper[4878]: E1204 15:58:36.267294 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd\": container with ID starting with d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd not found: ID does not exist" containerID="d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.267327 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd"} err="failed to get container status \"d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd\": rpc error: code = NotFound desc = could not find container \"d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd\": container with ID 
starting with d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd not found: ID does not exist" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.267348 4878 scope.go:117] "RemoveContainer" containerID="a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.272533 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f"} err="failed to get container status \"a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f\": rpc error: code = NotFound desc = could not find container \"a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f\": container with ID starting with a104abcd695526c545f175d6534af49f1d9689aa92dc3168f25c21eac6169c9f not found: ID does not exist" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.272584 4878 scope.go:117] "RemoveContainer" containerID="d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.277144 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd"} err="failed to get container status \"d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd\": rpc error: code = NotFound desc = could not find container \"d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd\": container with ID starting with d049177067798db3803f316bfd39f85f2f26a60fd8e1f8f27dc9154974eca4cd not found: ID does not exist" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.281122 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:36 crc kubenswrapper[4878]: E1204 15:58:36.281709 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54d7c14-2041-4319-9d8a-013b0c0e9761" 
containerName="nova-metadata-log" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.281733 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54d7c14-2041-4319-9d8a-013b0c0e9761" containerName="nova-metadata-log" Dec 04 15:58:36 crc kubenswrapper[4878]: E1204 15:58:36.281764 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54d7c14-2041-4319-9d8a-013b0c0e9761" containerName="nova-metadata-metadata" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.281771 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54d7c14-2041-4319-9d8a-013b0c0e9761" containerName="nova-metadata-metadata" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.281983 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54d7c14-2041-4319-9d8a-013b0c0e9761" containerName="nova-metadata-log" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.282005 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54d7c14-2041-4319-9d8a-013b0c0e9761" containerName="nova-metadata-metadata" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.283389 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.290225 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.291634 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.296346 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.353963 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.354053 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-config-data\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.354117 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66wxv\" (UniqueName: \"kubernetes.io/projected/70f4c70b-4275-483f-93de-88cbea32b13b-kube-api-access-66wxv\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.354200 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.354321 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f4c70b-4275-483f-93de-88cbea32b13b-logs\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.463420 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66wxv\" (UniqueName: \"kubernetes.io/projected/70f4c70b-4275-483f-93de-88cbea32b13b-kube-api-access-66wxv\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.463501 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.463628 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f4c70b-4275-483f-93de-88cbea32b13b-logs\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.463707 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " 
pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.463759 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-config-data\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.471031 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f4c70b-4275-483f-93de-88cbea32b13b-logs\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.476769 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.477185 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-config-data\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.482006 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.497624 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66wxv\" (UniqueName: 
\"kubernetes.io/projected/70f4c70b-4275-483f-93de-88cbea32b13b-kube-api-access-66wxv\") pod \"nova-metadata-0\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.554624 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.554988 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.604604 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.698040 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.698103 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 04 15:58:36 crc kubenswrapper[4878]: I1204 15:58:36.771335 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.053969 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.123149 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.159075 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:37 crc kubenswrapper[4878]: W1204 15:58:37.179520 4878 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70f4c70b_4275_483f_93de_88cbea32b13b.slice/crio-3167ec95d5c5e08f23c93eb93d4e153bf7d8140ca7d35bb8606a88423f6c35bb WatchSource:0}: Error finding container 3167ec95d5c5e08f23c93eb93d4e153bf7d8140ca7d35bb8606a88423f6c35bb: Status 404 returned error can't find the container with id 3167ec95d5c5e08f23c93eb93d4e153bf7d8140ca7d35bb8606a88423f6c35bb Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.256832 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54d7c14-2041-4319-9d8a-013b0c0e9761" path="/var/lib/kubelet/pods/f54d7c14-2041-4319-9d8a-013b0c0e9761/volumes" Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.260370 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5kpzl"] Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.262826 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" podUID="032679f9-cf8f-4acf-8aea-37675bdf187d" containerName="dnsmasq-dns" containerID="cri-o://c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e" gracePeriod=10 Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.359852 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70f4c70b-4275-483f-93de-88cbea32b13b","Type":"ContainerStarted","Data":"3167ec95d5c5e08f23c93eb93d4e153bf7d8140ca7d35bb8606a88423f6c35bb"} Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.426432 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.640267 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Dec 04 15:58:37 crc kubenswrapper[4878]: I1204 15:58:37.640353 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.184:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.043054 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.144389 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-swift-storage-0\") pod \"032679f9-cf8f-4acf-8aea-37675bdf187d\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.144493 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-svc\") pod \"032679f9-cf8f-4acf-8aea-37675bdf187d\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.144614 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6clh4\" (UniqueName: \"kubernetes.io/projected/032679f9-cf8f-4acf-8aea-37675bdf187d-kube-api-access-6clh4\") pod \"032679f9-cf8f-4acf-8aea-37675bdf187d\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.144735 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-sb\") pod \"032679f9-cf8f-4acf-8aea-37675bdf187d\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " Dec 
04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.144797 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-nb\") pod \"032679f9-cf8f-4acf-8aea-37675bdf187d\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.144891 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-config\") pod \"032679f9-cf8f-4acf-8aea-37675bdf187d\" (UID: \"032679f9-cf8f-4acf-8aea-37675bdf187d\") " Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.178598 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/032679f9-cf8f-4acf-8aea-37675bdf187d-kube-api-access-6clh4" (OuterVolumeSpecName: "kube-api-access-6clh4") pod "032679f9-cf8f-4acf-8aea-37675bdf187d" (UID: "032679f9-cf8f-4acf-8aea-37675bdf187d"). InnerVolumeSpecName "kube-api-access-6clh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.252775 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6clh4\" (UniqueName: \"kubernetes.io/projected/032679f9-cf8f-4acf-8aea-37675bdf187d-kube-api-access-6clh4\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.294947 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "032679f9-cf8f-4acf-8aea-37675bdf187d" (UID: "032679f9-cf8f-4acf-8aea-37675bdf187d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.350661 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-config" (OuterVolumeSpecName: "config") pod "032679f9-cf8f-4acf-8aea-37675bdf187d" (UID: "032679f9-cf8f-4acf-8aea-37675bdf187d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.355634 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.355684 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.386666 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "032679f9-cf8f-4acf-8aea-37675bdf187d" (UID: "032679f9-cf8f-4acf-8aea-37675bdf187d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.399295 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerStarted","Data":"7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b"} Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.400184 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.405930 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "032679f9-cf8f-4acf-8aea-37675bdf187d" (UID: "032679f9-cf8f-4acf-8aea-37675bdf187d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.414133 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70f4c70b-4275-483f-93de-88cbea32b13b","Type":"ContainerStarted","Data":"82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875"} Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.414241 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70f4c70b-4275-483f-93de-88cbea32b13b","Type":"ContainerStarted","Data":"16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9"} Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.419547 4878 generic.go:334] "Generic (PLEG): container finished" podID="032679f9-cf8f-4acf-8aea-37675bdf187d" containerID="c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e" exitCode=0 Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.420842 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.421036 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" event={"ID":"032679f9-cf8f-4acf-8aea-37675bdf187d","Type":"ContainerDied","Data":"c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e"} Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.421074 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-5kpzl" event={"ID":"032679f9-cf8f-4acf-8aea-37675bdf187d","Type":"ContainerDied","Data":"bfcd250a858111b36c9bf6e38a92e5ce047813f8ebd257f424daae7ce83cb1a9"} Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.421092 4878 scope.go:117] "RemoveContainer" containerID="c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.444392 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "032679f9-cf8f-4acf-8aea-37675bdf187d" (UID: "032679f9-cf8f-4acf-8aea-37675bdf187d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.449553 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.790294356 podStartE2EDuration="10.449518554s" podCreationTimestamp="2025-12-04 15:58:28 +0000 UTC" firstStartedPulling="2025-12-04 15:58:29.432254337 +0000 UTC m=+1353.394791293" lastFinishedPulling="2025-12-04 15:58:37.091478535 +0000 UTC m=+1361.054015491" observedRunningTime="2025-12-04 15:58:38.434847583 +0000 UTC m=+1362.397384549" watchObservedRunningTime="2025-12-04 15:58:38.449518554 +0000 UTC m=+1362.412055510" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.469845 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.469909 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.469926 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/032679f9-cf8f-4acf-8aea-37675bdf187d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.481457 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.481427841 podStartE2EDuration="2.481427841s" podCreationTimestamp="2025-12-04 15:58:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:38.459687122 +0000 UTC m=+1362.422224078" watchObservedRunningTime="2025-12-04 15:58:38.481427841 
+0000 UTC m=+1362.443964787" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.484474 4878 scope.go:117] "RemoveContainer" containerID="006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.537146 4878 scope.go:117] "RemoveContainer" containerID="c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e" Dec 04 15:58:38 crc kubenswrapper[4878]: E1204 15:58:38.538060 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e\": container with ID starting with c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e not found: ID does not exist" containerID="c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.538194 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e"} err="failed to get container status \"c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e\": rpc error: code = NotFound desc = could not find container \"c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e\": container with ID starting with c02b781c38717ef32e5fb4c9ef3a4914105d5c4e2406f94576071ab83cb5d97e not found: ID does not exist" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.538327 4878 scope.go:117] "RemoveContainer" containerID="006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63" Dec 04 15:58:38 crc kubenswrapper[4878]: E1204 15:58:38.541463 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63\": container with ID starting with 006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63 not found: ID does 
not exist" containerID="006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.541524 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63"} err="failed to get container status \"006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63\": rpc error: code = NotFound desc = could not find container \"006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63\": container with ID starting with 006cc7d9da9fd170c8254f6459d7f0fb27dcccd5ffb9195e909c374886934a63 not found: ID does not exist" Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.763553 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5kpzl"] Dec 04 15:58:38 crc kubenswrapper[4878]: I1204 15:58:38.775320 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-5kpzl"] Dec 04 15:58:39 crc kubenswrapper[4878]: I1204 15:58:39.195454 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="032679f9-cf8f-4acf-8aea-37675bdf187d" path="/var/lib/kubelet/pods/032679f9-cf8f-4acf-8aea-37675bdf187d/volumes" Dec 04 15:58:39 crc kubenswrapper[4878]: I1204 15:58:39.435012 4878 generic.go:334] "Generic (PLEG): container finished" podID="e365e201-9030-4248-a6d6-0c250d3f3251" containerID="79a39fc4ccc7bd031ff1f8533c688719996126ebd4ac86f6d548bdad1754803b" exitCode=0 Dec 04 15:58:39 crc kubenswrapper[4878]: I1204 15:58:39.435092 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hq8dz" event={"ID":"e365e201-9030-4248-a6d6-0c250d3f3251","Type":"ContainerDied","Data":"79a39fc4ccc7bd031ff1f8533c688719996126ebd4ac86f6d548bdad1754803b"} Dec 04 15:58:40 crc kubenswrapper[4878]: I1204 15:58:40.849623 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:40 crc kubenswrapper[4878]: I1204 15:58:40.942343 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-config-data\") pod \"e365e201-9030-4248-a6d6-0c250d3f3251\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " Dec 04 15:58:40 crc kubenswrapper[4878]: I1204 15:58:40.942403 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-scripts\") pod \"e365e201-9030-4248-a6d6-0c250d3f3251\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " Dec 04 15:58:40 crc kubenswrapper[4878]: I1204 15:58:40.942651 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-combined-ca-bundle\") pod \"e365e201-9030-4248-a6d6-0c250d3f3251\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " Dec 04 15:58:40 crc kubenswrapper[4878]: I1204 15:58:40.942770 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jzd2\" (UniqueName: \"kubernetes.io/projected/e365e201-9030-4248-a6d6-0c250d3f3251-kube-api-access-2jzd2\") pod \"e365e201-9030-4248-a6d6-0c250d3f3251\" (UID: \"e365e201-9030-4248-a6d6-0c250d3f3251\") " Dec 04 15:58:40 crc kubenswrapper[4878]: I1204 15:58:40.951501 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-scripts" (OuterVolumeSpecName: "scripts") pod "e365e201-9030-4248-a6d6-0c250d3f3251" (UID: "e365e201-9030-4248-a6d6-0c250d3f3251"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:40 crc kubenswrapper[4878]: I1204 15:58:40.953747 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e365e201-9030-4248-a6d6-0c250d3f3251-kube-api-access-2jzd2" (OuterVolumeSpecName: "kube-api-access-2jzd2") pod "e365e201-9030-4248-a6d6-0c250d3f3251" (UID: "e365e201-9030-4248-a6d6-0c250d3f3251"). InnerVolumeSpecName "kube-api-access-2jzd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:40 crc kubenswrapper[4878]: I1204 15:58:40.982123 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e365e201-9030-4248-a6d6-0c250d3f3251" (UID: "e365e201-9030-4248-a6d6-0c250d3f3251"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:40 crc kubenswrapper[4878]: I1204 15:58:40.988030 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-config-data" (OuterVolumeSpecName: "config-data") pod "e365e201-9030-4248-a6d6-0c250d3f3251" (UID: "e365e201-9030-4248-a6d6-0c250d3f3251"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.045850 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.045910 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jzd2\" (UniqueName: \"kubernetes.io/projected/e365e201-9030-4248-a6d6-0c250d3f3251-kube-api-access-2jzd2\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.045927 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.045939 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e365e201-9030-4248-a6d6-0c250d3f3251-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.456215 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-hq8dz" event={"ID":"e365e201-9030-4248-a6d6-0c250d3f3251","Type":"ContainerDied","Data":"3b623534c0d934a3e698894c8e15c624d863a7459355205304c0c18ffe9603e9"} Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.456532 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b623534c0d934a3e698894c8e15c624d863a7459355205304c0c18ffe9603e9" Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.456270 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-hq8dz" Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.604995 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.605361 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.692057 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.692380 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerName="nova-api-log" containerID="cri-o://57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f" gracePeriod=30 Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.692590 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerName="nova-api-api" containerID="cri-o://584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808" gracePeriod=30 Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.711126 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.712027 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="602f925b-f91f-4f6a-a6b6-5d5dc5c10175" containerName="nova-scheduler-scheduler" containerID="cri-o://d764e67582f60d975d639f56f6bc8fc7eac62ff608c93c92a89080363efaafa3" gracePeriod=30 Dec 04 15:58:41 crc kubenswrapper[4878]: I1204 15:58:41.732472 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.483257 4878 generic.go:334] "Generic (PLEG): 
container finished" podID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerID="57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f" exitCode=143 Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.483300 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd51020e-01f4-4ef3-ba9e-28c46575e18e","Type":"ContainerDied","Data":"57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f"} Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.488903 4878 generic.go:334] "Generic (PLEG): container finished" podID="602f925b-f91f-4f6a-a6b6-5d5dc5c10175" containerID="d764e67582f60d975d639f56f6bc8fc7eac62ff608c93c92a89080363efaafa3" exitCode=0 Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.489002 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"602f925b-f91f-4f6a-a6b6-5d5dc5c10175","Type":"ContainerDied","Data":"d764e67582f60d975d639f56f6bc8fc7eac62ff608c93c92a89080363efaafa3"} Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.808637 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.886015 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7qxl\" (UniqueName: \"kubernetes.io/projected/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-kube-api-access-r7qxl\") pod \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.887318 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-config-data\") pod \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.887370 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-combined-ca-bundle\") pod \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\" (UID: \"602f925b-f91f-4f6a-a6b6-5d5dc5c10175\") " Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.933063 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-kube-api-access-r7qxl" (OuterVolumeSpecName: "kube-api-access-r7qxl") pod "602f925b-f91f-4f6a-a6b6-5d5dc5c10175" (UID: "602f925b-f91f-4f6a-a6b6-5d5dc5c10175"). InnerVolumeSpecName "kube-api-access-r7qxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:42 crc kubenswrapper[4878]: I1204 15:58:42.945104 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "602f925b-f91f-4f6a-a6b6-5d5dc5c10175" (UID: "602f925b-f91f-4f6a-a6b6-5d5dc5c10175"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.005278 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7qxl\" (UniqueName: \"kubernetes.io/projected/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-kube-api-access-r7qxl\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.005471 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.034028 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-config-data" (OuterVolumeSpecName: "config-data") pod "602f925b-f91f-4f6a-a6b6-5d5dc5c10175" (UID: "602f925b-f91f-4f6a-a6b6-5d5dc5c10175"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.107344 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f925b-f91f-4f6a-a6b6-5d5dc5c10175-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:43 crc kubenswrapper[4878]: E1204 15:58:43.439234 4878 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod602f925b_f91f_4f6a_a6b6_5d5dc5c10175.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod602f925b_f91f_4f6a_a6b6_5d5dc5c10175.slice/crio-4e01c1d1243ab3c3fe3ac35f57dab641bd1a3a8f605afd1255789a814c554671\": RecentStats: unable to find data in memory cache]" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.514727 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"602f925b-f91f-4f6a-a6b6-5d5dc5c10175","Type":"ContainerDied","Data":"4e01c1d1243ab3c3fe3ac35f57dab641bd1a3a8f605afd1255789a814c554671"} Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.514800 4878 scope.go:117] "RemoveContainer" containerID="d764e67582f60d975d639f56f6bc8fc7eac62ff608c93c92a89080363efaafa3" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.514992 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.520153 4878 generic.go:334] "Generic (PLEG): container finished" podID="2166f9a6-f18a-4637-b089-5c87576d24d5" containerID="84046239a736de12ea5542c886205af7998f4efa9ade1db88fedc2431972fde1" exitCode=0 Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.520666 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="70f4c70b-4275-483f-93de-88cbea32b13b" containerName="nova-metadata-log" containerID="cri-o://16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9" gracePeriod=30 Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.520758 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j4t5x" event={"ID":"2166f9a6-f18a-4637-b089-5c87576d24d5","Type":"ContainerDied","Data":"84046239a736de12ea5542c886205af7998f4efa9ade1db88fedc2431972fde1"} Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.520948 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="70f4c70b-4275-483f-93de-88cbea32b13b" containerName="nova-metadata-metadata" containerID="cri-o://82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875" gracePeriod=30 Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.593210 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 15:58:43 crc 
kubenswrapper[4878]: I1204 15:58:43.605102 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.617113 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 15:58:43 crc kubenswrapper[4878]: E1204 15:58:43.617598 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032679f9-cf8f-4acf-8aea-37675bdf187d" containerName="dnsmasq-dns" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.617615 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="032679f9-cf8f-4acf-8aea-37675bdf187d" containerName="dnsmasq-dns" Dec 04 15:58:43 crc kubenswrapper[4878]: E1204 15:58:43.617633 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="032679f9-cf8f-4acf-8aea-37675bdf187d" containerName="init" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.617638 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="032679f9-cf8f-4acf-8aea-37675bdf187d" containerName="init" Dec 04 15:58:43 crc kubenswrapper[4878]: E1204 15:58:43.617653 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e365e201-9030-4248-a6d6-0c250d3f3251" containerName="nova-manage" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.617661 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e365e201-9030-4248-a6d6-0c250d3f3251" containerName="nova-manage" Dec 04 15:58:43 crc kubenswrapper[4878]: E1204 15:58:43.617682 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602f925b-f91f-4f6a-a6b6-5d5dc5c10175" containerName="nova-scheduler-scheduler" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.617688 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="602f925b-f91f-4f6a-a6b6-5d5dc5c10175" containerName="nova-scheduler-scheduler" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.617912 4878 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="032679f9-cf8f-4acf-8aea-37675bdf187d" containerName="dnsmasq-dns" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.617932 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e365e201-9030-4248-a6d6-0c250d3f3251" containerName="nova-manage" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.617945 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="602f925b-f91f-4f6a-a6b6-5d5dc5c10175" containerName="nova-scheduler-scheduler" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.618799 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.635418 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.636607 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.719769 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-config-data\") pod \"nova-scheduler-0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.719838 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp2xg\" (UniqueName: \"kubernetes.io/projected/170a9b65-ba8b-46ac-905b-610256381bb0-kube-api-access-xp2xg\") pod \"nova-scheduler-0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.719982 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.822301 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-config-data\") pod \"nova-scheduler-0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.822452 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp2xg\" (UniqueName: \"kubernetes.io/projected/170a9b65-ba8b-46ac-905b-610256381bb0-kube-api-access-xp2xg\") pod \"nova-scheduler-0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.822514 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.828574 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-config-data\") pod \"nova-scheduler-0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.828741 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") " 
pod="openstack/nova-scheduler-0" Dec 04 15:58:43 crc kubenswrapper[4878]: I1204 15:58:43.841828 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp2xg\" (UniqueName: \"kubernetes.io/projected/170a9b65-ba8b-46ac-905b-610256381bb0-kube-api-access-xp2xg\") pod \"nova-scheduler-0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") " pod="openstack/nova-scheduler-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.085096 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.228581 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.344724 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-config-data\") pod \"70f4c70b-4275-483f-93de-88cbea32b13b\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.344904 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-combined-ca-bundle\") pod \"70f4c70b-4275-483f-93de-88cbea32b13b\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.345036 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-nova-metadata-tls-certs\") pod \"70f4c70b-4275-483f-93de-88cbea32b13b\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.345135 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66wxv\" 
(UniqueName: \"kubernetes.io/projected/70f4c70b-4275-483f-93de-88cbea32b13b-kube-api-access-66wxv\") pod \"70f4c70b-4275-483f-93de-88cbea32b13b\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.345173 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f4c70b-4275-483f-93de-88cbea32b13b-logs\") pod \"70f4c70b-4275-483f-93de-88cbea32b13b\" (UID: \"70f4c70b-4275-483f-93de-88cbea32b13b\") " Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.351584 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70f4c70b-4275-483f-93de-88cbea32b13b-logs" (OuterVolumeSpecName: "logs") pod "70f4c70b-4275-483f-93de-88cbea32b13b" (UID: "70f4c70b-4275-483f-93de-88cbea32b13b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.376761 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f4c70b-4275-483f-93de-88cbea32b13b-kube-api-access-66wxv" (OuterVolumeSpecName: "kube-api-access-66wxv") pod "70f4c70b-4275-483f-93de-88cbea32b13b" (UID: "70f4c70b-4275-483f-93de-88cbea32b13b"). InnerVolumeSpecName "kube-api-access-66wxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.383355 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70f4c70b-4275-483f-93de-88cbea32b13b" (UID: "70f4c70b-4275-483f-93de-88cbea32b13b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.385050 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-config-data" (OuterVolumeSpecName: "config-data") pod "70f4c70b-4275-483f-93de-88cbea32b13b" (UID: "70f4c70b-4275-483f-93de-88cbea32b13b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.449908 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.450197 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.450207 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66wxv\" (UniqueName: \"kubernetes.io/projected/70f4c70b-4275-483f-93de-88cbea32b13b-kube-api-access-66wxv\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.450219 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70f4c70b-4275-483f-93de-88cbea32b13b-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.458694 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "70f4c70b-4275-483f-93de-88cbea32b13b" (UID: "70f4c70b-4275-483f-93de-88cbea32b13b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.533672 4878 generic.go:334] "Generic (PLEG): container finished" podID="70f4c70b-4275-483f-93de-88cbea32b13b" containerID="82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875" exitCode=0 Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.534826 4878 generic.go:334] "Generic (PLEG): container finished" podID="70f4c70b-4275-483f-93de-88cbea32b13b" containerID="16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9" exitCode=143 Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.533788 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.533767 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70f4c70b-4275-483f-93de-88cbea32b13b","Type":"ContainerDied","Data":"82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875"} Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.535133 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70f4c70b-4275-483f-93de-88cbea32b13b","Type":"ContainerDied","Data":"16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9"} Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.535154 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70f4c70b-4275-483f-93de-88cbea32b13b","Type":"ContainerDied","Data":"3167ec95d5c5e08f23c93eb93d4e153bf7d8140ca7d35bb8606a88423f6c35bb"} Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.535177 4878 scope.go:117] "RemoveContainer" containerID="82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.552849 4878 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/70f4c70b-4275-483f-93de-88cbea32b13b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.577547 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.602550 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.611651 4878 scope.go:117] "RemoveContainer" containerID="16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.621696 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:44 crc kubenswrapper[4878]: E1204 15:58:44.622365 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f4c70b-4275-483f-93de-88cbea32b13b" containerName="nova-metadata-log" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.622451 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f4c70b-4275-483f-93de-88cbea32b13b" containerName="nova-metadata-log" Dec 04 15:58:44 crc kubenswrapper[4878]: E1204 15:58:44.622517 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f4c70b-4275-483f-93de-88cbea32b13b" containerName="nova-metadata-metadata" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.622576 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f4c70b-4275-483f-93de-88cbea32b13b" containerName="nova-metadata-metadata" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.622902 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f4c70b-4275-483f-93de-88cbea32b13b" containerName="nova-metadata-log" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.623005 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f4c70b-4275-483f-93de-88cbea32b13b" containerName="nova-metadata-metadata" Dec 04 15:58:44 crc 
kubenswrapper[4878]: I1204 15:58:44.624201 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.632334 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.632545 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.635449 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.648423 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.676160 4878 scope.go:117] "RemoveContainer" containerID="82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875" Dec 04 15:58:44 crc kubenswrapper[4878]: E1204 15:58:44.678024 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875\": container with ID starting with 82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875 not found: ID does not exist" containerID="82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.678091 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875"} err="failed to get container status \"82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875\": rpc error: code = NotFound desc = could not find container \"82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875\": container with ID starting with 
82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875 not found: ID does not exist" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.678147 4878 scope.go:117] "RemoveContainer" containerID="16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9" Dec 04 15:58:44 crc kubenswrapper[4878]: E1204 15:58:44.683016 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9\": container with ID starting with 16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9 not found: ID does not exist" containerID="16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.683059 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9"} err="failed to get container status \"16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9\": rpc error: code = NotFound desc = could not find container \"16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9\": container with ID starting with 16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9 not found: ID does not exist" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.683083 4878 scope.go:117] "RemoveContainer" containerID="82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.683398 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875"} err="failed to get container status \"82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875\": rpc error: code = NotFound desc = could not find container \"82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875\": container with ID 
starting with 82cc4a3ebfa33a1e5318fd181d2339f2d91d0787b6966274495334e9d0111875 not found: ID does not exist" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.683416 4878 scope.go:117] "RemoveContainer" containerID="16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.686990 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9"} err="failed to get container status \"16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9\": rpc error: code = NotFound desc = could not find container \"16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9\": container with ID starting with 16ac2d75d1b2737611f9c8368758b4a497428d06715888a3b07105bf49b62ff9 not found: ID does not exist" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.768036 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hx2d\" (UniqueName: \"kubernetes.io/projected/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-kube-api-access-9hx2d\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.768121 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.768166 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-logs\") pod \"nova-metadata-0\" (UID: 
\"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.768223 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-config-data\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.768242 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.872387 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-config-data\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.872697 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.872814 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hx2d\" (UniqueName: \"kubernetes.io/projected/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-kube-api-access-9hx2d\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.872883 
4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.872926 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-logs\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.874328 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-logs\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.879579 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-config-data\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.882472 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.884069 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.893080 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hx2d\" (UniqueName: \"kubernetes.io/projected/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-kube-api-access-9hx2d\") pod \"nova-metadata-0\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") " pod="openstack/nova-metadata-0" Dec 04 15:58:44 crc kubenswrapper[4878]: I1204 15:58:44.957717 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.107525 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.198770 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602f925b-f91f-4f6a-a6b6-5d5dc5c10175" path="/var/lib/kubelet/pods/602f925b-f91f-4f6a-a6b6-5d5dc5c10175/volumes" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.200463 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f4c70b-4275-483f-93de-88cbea32b13b" path="/var/lib/kubelet/pods/70f4c70b-4275-483f-93de-88cbea32b13b/volumes" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.284569 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-config-data\") pod \"2166f9a6-f18a-4637-b089-5c87576d24d5\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.284631 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-combined-ca-bundle\") pod \"2166f9a6-f18a-4637-b089-5c87576d24d5\" (UID: 
\"2166f9a6-f18a-4637-b089-5c87576d24d5\") " Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.284758 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-scripts\") pod \"2166f9a6-f18a-4637-b089-5c87576d24d5\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.284842 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml5k9\" (UniqueName: \"kubernetes.io/projected/2166f9a6-f18a-4637-b089-5c87576d24d5-kube-api-access-ml5k9\") pod \"2166f9a6-f18a-4637-b089-5c87576d24d5\" (UID: \"2166f9a6-f18a-4637-b089-5c87576d24d5\") " Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.290361 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-scripts" (OuterVolumeSpecName: "scripts") pod "2166f9a6-f18a-4637-b089-5c87576d24d5" (UID: "2166f9a6-f18a-4637-b089-5c87576d24d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.293864 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2166f9a6-f18a-4637-b089-5c87576d24d5-kube-api-access-ml5k9" (OuterVolumeSpecName: "kube-api-access-ml5k9") pod "2166f9a6-f18a-4637-b089-5c87576d24d5" (UID: "2166f9a6-f18a-4637-b089-5c87576d24d5"). InnerVolumeSpecName "kube-api-access-ml5k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.319423 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2166f9a6-f18a-4637-b089-5c87576d24d5" (UID: "2166f9a6-f18a-4637-b089-5c87576d24d5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.321038 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-config-data" (OuterVolumeSpecName: "config-data") pod "2166f9a6-f18a-4637-b089-5c87576d24d5" (UID: "2166f9a6-f18a-4637-b089-5c87576d24d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.364844 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.387050 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.387088 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.387103 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2166f9a6-f18a-4637-b089-5c87576d24d5-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.387115 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml5k9\" (UniqueName: \"kubernetes.io/projected/2166f9a6-f18a-4637-b089-5c87576d24d5-kube-api-access-ml5k9\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.481856 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:58:45 crc kubenswrapper[4878]: W1204 15:58:45.486785 4878 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc09e2e56_98f5_4ed7_8214_5d86c7e7ac3b.slice/crio-234d75e4bffa7469f4c33e0d391c72c57855987341893a6007d24e1ff3a05421 WatchSource:0}: Error finding container 234d75e4bffa7469f4c33e0d391c72c57855987341893a6007d24e1ff3a05421: Status 404 returned error can't find the container with id 234d75e4bffa7469f4c33e0d391c72c57855987341893a6007d24e1ff3a05421 Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.488104 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd51020e-01f4-4ef3-ba9e-28c46575e18e-logs\") pod \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.488181 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-config-data\") pod \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.488402 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-combined-ca-bundle\") pod \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.488541 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkcrc\" (UniqueName: \"kubernetes.io/projected/fd51020e-01f4-4ef3-ba9e-28c46575e18e-kube-api-access-wkcrc\") pod \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\" (UID: \"fd51020e-01f4-4ef3-ba9e-28c46575e18e\") " Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.488748 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/fd51020e-01f4-4ef3-ba9e-28c46575e18e-logs" (OuterVolumeSpecName: "logs") pod "fd51020e-01f4-4ef3-ba9e-28c46575e18e" (UID: "fd51020e-01f4-4ef3-ba9e-28c46575e18e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.489608 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd51020e-01f4-4ef3-ba9e-28c46575e18e-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.492637 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd51020e-01f4-4ef3-ba9e-28c46575e18e-kube-api-access-wkcrc" (OuterVolumeSpecName: "kube-api-access-wkcrc") pod "fd51020e-01f4-4ef3-ba9e-28c46575e18e" (UID: "fd51020e-01f4-4ef3-ba9e-28c46575e18e"). InnerVolumeSpecName "kube-api-access-wkcrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.520941 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd51020e-01f4-4ef3-ba9e-28c46575e18e" (UID: "fd51020e-01f4-4ef3-ba9e-28c46575e18e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.524926 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-config-data" (OuterVolumeSpecName: "config-data") pod "fd51020e-01f4-4ef3-ba9e-28c46575e18e" (UID: "fd51020e-01f4-4ef3-ba9e-28c46575e18e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.550910 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"170a9b65-ba8b-46ac-905b-610256381bb0","Type":"ContainerStarted","Data":"98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d"} Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.550968 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"170a9b65-ba8b-46ac-905b-610256381bb0","Type":"ContainerStarted","Data":"15a0b793d6cfc601794f1cb2d3a323b197b357973dd055829e746912f6cc9af6"} Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.553071 4878 generic.go:334] "Generic (PLEG): container finished" podID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerID="584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808" exitCode=0 Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.553150 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd51020e-01f4-4ef3-ba9e-28c46575e18e","Type":"ContainerDied","Data":"584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808"} Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.553221 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fd51020e-01f4-4ef3-ba9e-28c46575e18e","Type":"ContainerDied","Data":"7c15549fa772342fc64908ff724717fcd9e8683a199a41046969c832000ad1e8"} Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.553246 4878 scope.go:117] "RemoveContainer" containerID="584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.553265 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.561643 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b","Type":"ContainerStarted","Data":"234d75e4bffa7469f4c33e0d391c72c57855987341893a6007d24e1ff3a05421"} Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.568225 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-j4t5x" event={"ID":"2166f9a6-f18a-4637-b089-5c87576d24d5","Type":"ContainerDied","Data":"0a3a2dba875d7735e69fbe565af7988a02ce2b13e39c15df35a89a0713d71965"} Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.568281 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a3a2dba875d7735e69fbe565af7988a02ce2b13e39c15df35a89a0713d71965" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.568364 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-j4t5x" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.580708 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.580682016 podStartE2EDuration="2.580682016s" podCreationTimestamp="2025-12-04 15:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:45.568261792 +0000 UTC m=+1369.530798768" watchObservedRunningTime="2025-12-04 15:58:45.580682016 +0000 UTC m=+1369.543218972" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.589539 4878 scope.go:117] "RemoveContainer" containerID="57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.597477 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.597516 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd51020e-01f4-4ef3-ba9e-28c46575e18e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.597529 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkcrc\" (UniqueName: \"kubernetes.io/projected/fd51020e-01f4-4ef3-ba9e-28c46575e18e-kube-api-access-wkcrc\") on node \"crc\" DevicePath \"\"" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.614408 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.644390 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.644728 4878 
scope.go:117] "RemoveContainer" containerID="584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808" Dec 04 15:58:45 crc kubenswrapper[4878]: E1204 15:58:45.645318 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808\": container with ID starting with 584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808 not found: ID does not exist" containerID="584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.645351 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808"} err="failed to get container status \"584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808\": rpc error: code = NotFound desc = could not find container \"584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808\": container with ID starting with 584c04607aafa3da5228db78e75edccf705a42da40617f034048da0e846d0808 not found: ID does not exist" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.645378 4878 scope.go:117] "RemoveContainer" containerID="57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f" Dec 04 15:58:45 crc kubenswrapper[4878]: E1204 15:58:45.645820 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f\": container with ID starting with 57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f not found: ID does not exist" containerID="57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.645887 4878 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f"} err="failed to get container status \"57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f\": rpc error: code = NotFound desc = could not find container \"57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f\": container with ID starting with 57fe2b99a37c25a68b3110902dab11306bbd589c9708407f5a93a1353a89e24f not found: ID does not exist" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.676970 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 04 15:58:45 crc kubenswrapper[4878]: E1204 15:58:45.677562 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerName="nova-api-log" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.677588 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerName="nova-api-log" Dec 04 15:58:45 crc kubenswrapper[4878]: E1204 15:58:45.677607 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerName="nova-api-api" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.677617 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerName="nova-api-api" Dec 04 15:58:45 crc kubenswrapper[4878]: E1204 15:58:45.677649 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2166f9a6-f18a-4637-b089-5c87576d24d5" containerName="nova-cell1-conductor-db-sync" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.677655 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="2166f9a6-f18a-4637-b089-5c87576d24d5" containerName="nova-cell1-conductor-db-sync" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.677858 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" 
containerName="nova-api-log" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.677899 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" containerName="nova-api-api" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.677923 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="2166f9a6-f18a-4637-b089-5c87576d24d5" containerName="nova-cell1-conductor-db-sync" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.679289 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.683493 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.708811 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.730781 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.734360 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.738324 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.783954 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.801345 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-config-data\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.801401 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0969df37-1474-46f7-b310-1611b12d6396-logs\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.801460 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj7cl\" (UniqueName: \"kubernetes.io/projected/0969df37-1474-46f7-b310-1611b12d6396-kube-api-access-kj7cl\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.801522 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.903139 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllh5\" (UniqueName: \"kubernetes.io/projected/2fe85dc9-a6b6-40ec-90fd-fd5fab214c24-kube-api-access-jllh5\") pod \"nova-cell1-conductor-0\" (UID: \"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24\") " pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.903409 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-config-data\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.903452 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0969df37-1474-46f7-b310-1611b12d6396-logs\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.903476 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe85dc9-a6b6-40ec-90fd-fd5fab214c24-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24\") " pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.903550 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe85dc9-a6b6-40ec-90fd-fd5fab214c24-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24\") " pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.903604 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj7cl\" (UniqueName: 
\"kubernetes.io/projected/0969df37-1474-46f7-b310-1611b12d6396-kube-api-access-kj7cl\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.903782 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.903858 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0969df37-1474-46f7-b310-1611b12d6396-logs\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.910178 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.910246 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-config-data\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:45 crc kubenswrapper[4878]: I1204 15:58:45.924391 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj7cl\" (UniqueName: \"kubernetes.io/projected/0969df37-1474-46f7-b310-1611b12d6396-kube-api-access-kj7cl\") pod \"nova-api-0\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " pod="openstack/nova-api-0" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.005537 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe85dc9-a6b6-40ec-90fd-fd5fab214c24-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24\") " pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.005592 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe85dc9-a6b6-40ec-90fd-fd5fab214c24-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24\") " pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.005678 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jllh5\" (UniqueName: \"kubernetes.io/projected/2fe85dc9-a6b6-40ec-90fd-fd5fab214c24-kube-api-access-jllh5\") pod \"nova-cell1-conductor-0\" (UID: \"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24\") " pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.019000 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fe85dc9-a6b6-40ec-90fd-fd5fab214c24-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24\") " pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.019771 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fe85dc9-a6b6-40ec-90fd-fd5fab214c24-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24\") " pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.021776 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jllh5\" (UniqueName: 
\"kubernetes.io/projected/2fe85dc9-a6b6-40ec-90fd-fd5fab214c24-kube-api-access-jllh5\") pod \"nova-cell1-conductor-0\" (UID: \"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24\") " pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.046716 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.093346 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.563368 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:58:46 crc kubenswrapper[4878]: W1204 15:58:46.567449 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0969df37_1474_46f7_b310_1611b12d6396.slice/crio-0cb70926d840638e67b26127402a7ed79bfffb8510918b4e79b8e949a52afd89 WatchSource:0}: Error finding container 0cb70926d840638e67b26127402a7ed79bfffb8510918b4e79b8e949a52afd89: Status 404 returned error can't find the container with id 0cb70926d840638e67b26127402a7ed79bfffb8510918b4e79b8e949a52afd89 Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.588017 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b","Type":"ContainerStarted","Data":"ac67d8995b476ce906e3a896cb0630c28dbbeb5587ae362bbf75156d8d062247"} Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.588072 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b","Type":"ContainerStarted","Data":"1b0458df873454ec88f35bb0fceecaf9343b892b09979efb1ea96826fe8decc8"} Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.591953 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"0969df37-1474-46f7-b310-1611b12d6396","Type":"ContainerStarted","Data":"0cb70926d840638e67b26127402a7ed79bfffb8510918b4e79b8e949a52afd89"} Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.632741 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.632715885 podStartE2EDuration="2.632715885s" podCreationTimestamp="2025-12-04 15:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:46.63054054 +0000 UTC m=+1370.593077496" watchObservedRunningTime="2025-12-04 15:58:46.632715885 +0000 UTC m=+1370.595252861" Dec 04 15:58:46 crc kubenswrapper[4878]: I1204 15:58:46.672069 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 04 15:58:46 crc kubenswrapper[4878]: W1204 15:58:46.675363 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fe85dc9_a6b6_40ec_90fd_fd5fab214c24.slice/crio-9cb33f295701a9edfce915de8e78002944457aeb9d7498a61c2ecc55ced43256 WatchSource:0}: Error finding container 9cb33f295701a9edfce915de8e78002944457aeb9d7498a61c2ecc55ced43256: Status 404 returned error can't find the container with id 9cb33f295701a9edfce915de8e78002944457aeb9d7498a61c2ecc55ced43256 Dec 04 15:58:47 crc kubenswrapper[4878]: I1204 15:58:47.193115 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd51020e-01f4-4ef3-ba9e-28c46575e18e" path="/var/lib/kubelet/pods/fd51020e-01f4-4ef3-ba9e-28c46575e18e/volumes" Dec 04 15:58:47 crc kubenswrapper[4878]: I1204 15:58:47.604519 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24","Type":"ContainerStarted","Data":"4a7ba03b65f676a4c6e1796e75f7b0e1b4f0ad0d9b8f8b33e1ab6984fb46b11c"} Dec 04 15:58:47 crc 
kubenswrapper[4878]: I1204 15:58:47.606730 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:47 crc kubenswrapper[4878]: I1204 15:58:47.606863 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"2fe85dc9-a6b6-40ec-90fd-fd5fab214c24","Type":"ContainerStarted","Data":"9cb33f295701a9edfce915de8e78002944457aeb9d7498a61c2ecc55ced43256"} Dec 04 15:58:47 crc kubenswrapper[4878]: I1204 15:58:47.616452 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0969df37-1474-46f7-b310-1611b12d6396","Type":"ContainerStarted","Data":"912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9"} Dec 04 15:58:47 crc kubenswrapper[4878]: I1204 15:58:47.616503 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0969df37-1474-46f7-b310-1611b12d6396","Type":"ContainerStarted","Data":"8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2"} Dec 04 15:58:47 crc kubenswrapper[4878]: I1204 15:58:47.635675 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.635650602 podStartE2EDuration="2.635650602s" podCreationTimestamp="2025-12-04 15:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:58:47.623035213 +0000 UTC m=+1371.585572169" watchObservedRunningTime="2025-12-04 15:58:47.635650602 +0000 UTC m=+1371.598187558" Dec 04 15:58:47 crc kubenswrapper[4878]: I1204 15:58:47.643129 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.643102211 podStartE2EDuration="2.643102211s" podCreationTimestamp="2025-12-04 15:58:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-12-04 15:58:47.64229784 +0000 UTC m=+1371.604834806" watchObservedRunningTime="2025-12-04 15:58:47.643102211 +0000 UTC m=+1371.605639167" Dec 04 15:58:49 crc kubenswrapper[4878]: I1204 15:58:49.086207 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 04 15:58:49 crc kubenswrapper[4878]: I1204 15:58:49.957980 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 15:58:49 crc kubenswrapper[4878]: I1204 15:58:49.958036 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 15:58:51 crc kubenswrapper[4878]: I1204 15:58:51.122095 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 04 15:58:54 crc kubenswrapper[4878]: I1204 15:58:54.086093 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 04 15:58:54 crc kubenswrapper[4878]: I1204 15:58:54.114304 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 04 15:58:54 crc kubenswrapper[4878]: I1204 15:58:54.728754 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 04 15:58:54 crc kubenswrapper[4878]: I1204 15:58:54.958463 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 04 15:58:54 crc kubenswrapper[4878]: I1204 15:58:54.958798 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 04 15:58:55 crc kubenswrapper[4878]: I1204 15:58:55.975091 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 15:58:55 crc kubenswrapper[4878]: I1204 15:58:55.975102 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 15:58:56 crc kubenswrapper[4878]: I1204 15:58:56.049412 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 15:58:56 crc kubenswrapper[4878]: I1204 15:58:56.049482 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 15:58:57 crc kubenswrapper[4878]: I1204 15:58:57.132009 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 15:58:57 crc kubenswrapper[4878]: I1204 15:58:57.132355 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 15:58:58 crc kubenswrapper[4878]: I1204 15:58:58.578187 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 04 15:59:02 crc kubenswrapper[4878]: I1204 15:59:02.772047 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 15:59:02 crc kubenswrapper[4878]: I1204 15:59:02.775049 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" 
podUID="c38cb4c2-8301-4304-a2a8-beed07ff5c49" containerName="kube-state-metrics" containerID="cri-o://981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa" gracePeriod=30 Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.272596 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.413159 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzw8k\" (UniqueName: \"kubernetes.io/projected/c38cb4c2-8301-4304-a2a8-beed07ff5c49-kube-api-access-rzw8k\") pod \"c38cb4c2-8301-4304-a2a8-beed07ff5c49\" (UID: \"c38cb4c2-8301-4304-a2a8-beed07ff5c49\") " Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.424475 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c38cb4c2-8301-4304-a2a8-beed07ff5c49-kube-api-access-rzw8k" (OuterVolumeSpecName: "kube-api-access-rzw8k") pod "c38cb4c2-8301-4304-a2a8-beed07ff5c49" (UID: "c38cb4c2-8301-4304-a2a8-beed07ff5c49"). InnerVolumeSpecName "kube-api-access-rzw8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.516771 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzw8k\" (UniqueName: \"kubernetes.io/projected/c38cb4c2-8301-4304-a2a8-beed07ff5c49-kube-api-access-rzw8k\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.790963 4878 generic.go:334] "Generic (PLEG): container finished" podID="c38cb4c2-8301-4304-a2a8-beed07ff5c49" containerID="981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa" exitCode=2 Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.791001 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.791047 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c38cb4c2-8301-4304-a2a8-beed07ff5c49","Type":"ContainerDied","Data":"981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa"} Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.791111 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c38cb4c2-8301-4304-a2a8-beed07ff5c49","Type":"ContainerDied","Data":"7f9576834ee9c5169223feb5448e108ed3396f9388e5dfcc332621772a148f9e"} Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.791135 4878 scope.go:117] "RemoveContainer" containerID="981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.831083 4878 scope.go:117] "RemoveContainer" containerID="981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa" Dec 04 15:59:03 crc kubenswrapper[4878]: E1204 15:59:03.833976 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa\": container with ID starting with 981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa not found: ID does not exist" containerID="981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.834039 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa"} err="failed to get container status \"981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa\": rpc error: code = NotFound desc = could not find container \"981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa\": container with ID starting with 
981d86fd09b92bfce6a10a0b2d06a021961e49577c4562a73d98b511be522dfa not found: ID does not exist" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.870229 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.905238 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.924901 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 15:59:03 crc kubenswrapper[4878]: E1204 15:59:03.925957 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c38cb4c2-8301-4304-a2a8-beed07ff5c49" containerName="kube-state-metrics" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.925983 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c38cb4c2-8301-4304-a2a8-beed07ff5c49" containerName="kube-state-metrics" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.926224 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c38cb4c2-8301-4304-a2a8-beed07ff5c49" containerName="kube-state-metrics" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.927134 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.931915 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.932145 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 04 15:59:03 crc kubenswrapper[4878]: I1204 15:59:03.937171 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.026772 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/07cca0ce-2eda-43c8-94fa-3a307883e42a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.026844 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/07cca0ce-2eda-43c8-94fa-3a307883e42a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.027043 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vzdg\" (UniqueName: \"kubernetes.io/projected/07cca0ce-2eda-43c8-94fa-3a307883e42a-kube-api-access-4vzdg\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.027103 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/07cca0ce-2eda-43c8-94fa-3a307883e42a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: E1204 15:59:04.097625 4878 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc38cb4c2_8301_4304_a2a8_beed07ff5c49.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc38cb4c2_8301_4304_a2a8_beed07ff5c49.slice/crio-7f9576834ee9c5169223feb5448e108ed3396f9388e5dfcc332621772a148f9e\": RecentStats: unable to find data in memory cache]" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.129103 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/07cca0ce-2eda-43c8-94fa-3a307883e42a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.129152 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/07cca0ce-2eda-43c8-94fa-3a307883e42a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.129210 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vzdg\" (UniqueName: \"kubernetes.io/projected/07cca0ce-2eda-43c8-94fa-3a307883e42a-kube-api-access-4vzdg\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc 
kubenswrapper[4878]: I1204 15:59:04.129255 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cca0ce-2eda-43c8-94fa-3a307883e42a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.134416 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/07cca0ce-2eda-43c8-94fa-3a307883e42a-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.135688 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/07cca0ce-2eda-43c8-94fa-3a307883e42a-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.136291 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cca0ce-2eda-43c8-94fa-3a307883e42a-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.150960 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vzdg\" (UniqueName: \"kubernetes.io/projected/07cca0ce-2eda-43c8-94fa-3a307883e42a-kube-api-access-4vzdg\") pod \"kube-state-metrics-0\" (UID: \"07cca0ce-2eda-43c8-94fa-3a307883e42a\") " pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.252050 4878 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.753414 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.805071 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"07cca0ce-2eda-43c8-94fa-3a307883e42a","Type":"ContainerStarted","Data":"b83776381f247a59a0abc85250d1768e99ec84bfb265b9e1f9c3ee427d3b5ea4"} Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.982926 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.983066 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.991517 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 04 15:59:04 crc kubenswrapper[4878]: I1204 15:59:04.994476 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.066904 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.067241 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="ceilometer-central-agent" containerID="cri-o://ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d" gracePeriod=30 Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.067789 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="proxy-httpd" 
containerID="cri-o://7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b" gracePeriod=30 Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.067844 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="sg-core" containerID="cri-o://3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b" gracePeriod=30 Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.067903 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="ceilometer-notification-agent" containerID="cri-o://46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574" gracePeriod=30 Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.193438 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c38cb4c2-8301-4304-a2a8-beed07ff5c49" path="/var/lib/kubelet/pods/c38cb4c2-8301-4304-a2a8-beed07ff5c49/volumes" Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.638100 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.765644 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-config-data\") pod \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.765716 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-combined-ca-bundle\") pod \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.765851 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54w4d\" (UniqueName: \"kubernetes.io/projected/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-kube-api-access-54w4d\") pod \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\" (UID: \"591bb2b3-d6f8-4c4a-80bd-da2654b821f7\") " Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.775444 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-kube-api-access-54w4d" (OuterVolumeSpecName: "kube-api-access-54w4d") pod "591bb2b3-d6f8-4c4a-80bd-da2654b821f7" (UID: "591bb2b3-d6f8-4c4a-80bd-da2654b821f7"). InnerVolumeSpecName "kube-api-access-54w4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.801408 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "591bb2b3-d6f8-4c4a-80bd-da2654b821f7" (UID: "591bb2b3-d6f8-4c4a-80bd-da2654b821f7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.805022 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-config-data" (OuterVolumeSpecName: "config-data") pod "591bb2b3-d6f8-4c4a-80bd-da2654b821f7" (UID: "591bb2b3-d6f8-4c4a-80bd-da2654b821f7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.826920 4878 generic.go:334] "Generic (PLEG): container finished" podID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerID="7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b" exitCode=0 Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.826960 4878 generic.go:334] "Generic (PLEG): container finished" podID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerID="3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b" exitCode=2 Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.826972 4878 generic.go:334] "Generic (PLEG): container finished" podID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerID="ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d" exitCode=0 Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.827012 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerDied","Data":"7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b"} Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.827064 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerDied","Data":"3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b"} Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.827080 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerDied","Data":"ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d"} Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.832911 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"07cca0ce-2eda-43c8-94fa-3a307883e42a","Type":"ContainerStarted","Data":"61110429e22d3522ec1200b9d9c9e7cc8e44a51c4eeaff2fd1bcbdb862ecd630"} Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.833017 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.835972 4878 generic.go:334] "Generic (PLEG): container finished" podID="591bb2b3-d6f8-4c4a-80bd-da2654b821f7" containerID="e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3" exitCode=137 Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.836036 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.836084 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"591bb2b3-d6f8-4c4a-80bd-da2654b821f7","Type":"ContainerDied","Data":"e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3"}
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.836116 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"591bb2b3-d6f8-4c4a-80bd-da2654b821f7","Type":"ContainerDied","Data":"3f2d3461e732f496d606956583024a82d4fdff82d168c1b12697f9050b3af05c"}
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.836139 4878 scope.go:117] "RemoveContainer" containerID="e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.857327 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.471366254 podStartE2EDuration="2.857301181s" podCreationTimestamp="2025-12-04 15:59:03 +0000 UTC" firstStartedPulling="2025-12-04 15:59:04.762157143 +0000 UTC m=+1388.724694099" lastFinishedPulling="2025-12-04 15:59:05.14809207 +0000 UTC m=+1389.110629026" observedRunningTime="2025-12-04 15:59:05.853985657 +0000 UTC m=+1389.816522613" watchObservedRunningTime="2025-12-04 15:59:05.857301181 +0000 UTC m=+1389.819838137"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.873081 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.873126 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.873140 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54w4d\" (UniqueName: \"kubernetes.io/projected/591bb2b3-d6f8-4c4a-80bd-da2654b821f7-kube-api-access-54w4d\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.874178 4878 scope.go:117] "RemoveContainer" containerID="e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3"
Dec 04 15:59:05 crc kubenswrapper[4878]: E1204 15:59:05.874925 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3\": container with ID starting with e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3 not found: ID does not exist" containerID="e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.874992 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3"} err="failed to get container status \"e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3\": rpc error: code = NotFound desc = could not find container \"e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3\": container with ID starting with e0acd619d52d6af237f7afda04f3e4610a6d362158b79adb31bbeb98983b36b3 not found: ID does not exist"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.881819 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.895259 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.906936 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 04 15:59:05 crc kubenswrapper[4878]: E1204 15:59:05.907595 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="591bb2b3-d6f8-4c4a-80bd-da2654b821f7" containerName="nova-cell1-novncproxy-novncproxy"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.907620 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="591bb2b3-d6f8-4c4a-80bd-da2654b821f7" containerName="nova-cell1-novncproxy-novncproxy"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.907889 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="591bb2b3-d6f8-4c4a-80bd-da2654b821f7" containerName="nova-cell1-novncproxy-novncproxy"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.908684 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.911329 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.911487 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.911494 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Dec 04 15:59:05 crc kubenswrapper[4878]: I1204 15:59:05.915490 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.054110 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.054604 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.056379 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.068140 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.077624 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmvx\" (UniqueName: \"kubernetes.io/projected/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-kube-api-access-lcmvx\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.077679 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.077717 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.077767 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.077885 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.180938 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.181655 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.182385 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcmvx\" (UniqueName: \"kubernetes.io/projected/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-kube-api-access-lcmvx\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.182424 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.182486 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.186019 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.188102 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.188594 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.190168 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.202829 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcmvx\" (UniqueName: \"kubernetes.io/projected/efc8fa35-c810-4c40-8a4c-3a4fee3651ab-kube-api-access-lcmvx\") pod \"nova-cell1-novncproxy-0\" (UID: \"efc8fa35-c810-4c40-8a4c-3a4fee3651ab\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.272318 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.815356 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.868733 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"efc8fa35-c810-4c40-8a4c-3a4fee3651ab","Type":"ContainerStarted","Data":"0acea9f21b89efbe9000d3cdf257af4ce9eb5cf937e7a068ea627259bb630995"}
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.869041 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 04 15:59:06 crc kubenswrapper[4878]: I1204 15:59:06.899567 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.156408 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6ngd6"]
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.161715 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.234981 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="591bb2b3-d6f8-4c4a-80bd-da2654b821f7" path="/var/lib/kubelet/pods/591bb2b3-d6f8-4c4a-80bd-da2654b821f7/volumes"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.235697 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6ngd6"]
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.317113 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.317226 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g594x\" (UniqueName: \"kubernetes.io/projected/7119b41d-07a7-4b01-8a58-5b67479d095f-kube-api-access-g594x\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.317290 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.317317 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.317366 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.317570 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-config\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.419494 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.419893 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g594x\" (UniqueName: \"kubernetes.io/projected/7119b41d-07a7-4b01-8a58-5b67479d095f-kube-api-access-g594x\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.419943 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.419970 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.420008 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.420113 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-config\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.421246 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.421354 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.421946 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.423083 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-config\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.423579 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.458808 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g594x\" (UniqueName: \"kubernetes.io/projected/7119b41d-07a7-4b01-8a58-5b67479d095f-kube-api-access-g594x\") pod \"dnsmasq-dns-89c5cd4d5-6ngd6\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.559852 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.894476 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"efc8fa35-c810-4c40-8a4c-3a4fee3651ab","Type":"ContainerStarted","Data":"5a271a4947904569573e1d4aa71ceb588cb1e717c6da144e1649c4ffbe399a1f"}
Dec 04 15:59:07 crc kubenswrapper[4878]: I1204 15:59:07.928417 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.928389106 podStartE2EDuration="2.928389106s" podCreationTimestamp="2025-12-04 15:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:59:07.917274565 +0000 UTC m=+1391.879811531" watchObservedRunningTime="2025-12-04 15:59:07.928389106 +0000 UTC m=+1391.890926062"
Dec 04 15:59:08 crc kubenswrapper[4878]: W1204 15:59:08.088176 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7119b41d_07a7_4b01_8a58_5b67479d095f.slice/crio-bade23fa08c67c7082844f7d9edd643bc904d78677ca450511c16f6de4f77cda WatchSource:0}: Error finding container bade23fa08c67c7082844f7d9edd643bc904d78677ca450511c16f6de4f77cda: Status 404 returned error can't find the container with id bade23fa08c67c7082844f7d9edd643bc904d78677ca450511c16f6de4f77cda
Dec 04 15:59:08 crc kubenswrapper[4878]: I1204 15:59:08.106283 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6ngd6"]
Dec 04 15:59:08 crc kubenswrapper[4878]: I1204 15:59:08.906278 4878 generic.go:334] "Generic (PLEG): container finished" podID="7119b41d-07a7-4b01-8a58-5b67479d095f" containerID="d24af50360106a23912099fe8aecf0ca60393daf7d7931be40be9c3e9bae5223" exitCode=0
Dec 04 15:59:08 crc kubenswrapper[4878]: I1204 15:59:08.906375 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" event={"ID":"7119b41d-07a7-4b01-8a58-5b67479d095f","Type":"ContainerDied","Data":"d24af50360106a23912099fe8aecf0ca60393daf7d7931be40be9c3e9bae5223"}
Dec 04 15:59:08 crc kubenswrapper[4878]: I1204 15:59:08.906994 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" event={"ID":"7119b41d-07a7-4b01-8a58-5b67479d095f","Type":"ContainerStarted","Data":"bade23fa08c67c7082844f7d9edd643bc904d78677ca450511c16f6de4f77cda"}
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.717620 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.899924 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.919372 4878 generic.go:334] "Generic (PLEG): container finished" podID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerID="46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574" exitCode=0
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.919461 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.919467 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerDied","Data":"46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574"}
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.919550 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"72395eff-28a5-4604-b9ee-36a0c8cf6b37","Type":"ContainerDied","Data":"f097360015310efb9c3269986fac50d0eb09fd926b8a9e5c1480e414624c68b5"}
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.919578 4878 scope.go:117] "RemoveContainer" containerID="7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b"
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.923403 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-log" containerID="cri-o://8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2" gracePeriod=30
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.924151 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" event={"ID":"7119b41d-07a7-4b01-8a58-5b67479d095f","Type":"ContainerStarted","Data":"93b0c37db41560dc9e1f3aabe07f56c095621d8dcc83f3828920f0a16eb17a5b"}
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.924275 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-api" containerID="cri-o://912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9" gracePeriod=30
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.924506 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6"
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.972511 4878 scope.go:117] "RemoveContainer" containerID="3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b"
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.978631 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-log-httpd\") pod \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") "
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.978698 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-scripts\") pod \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") "
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.978740 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-combined-ca-bundle\") pod \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") "
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.978824 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-sg-core-conf-yaml\") pod \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") "
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.978866 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-run-httpd\") pod \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") "
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.979003 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prfg2\" (UniqueName: \"kubernetes.io/projected/72395eff-28a5-4604-b9ee-36a0c8cf6b37-kube-api-access-prfg2\") pod \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") "
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.979057 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-config-data\") pod \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\" (UID: \"72395eff-28a5-4604-b9ee-36a0c8cf6b37\") "
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.979198 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "72395eff-28a5-4604-b9ee-36a0c8cf6b37" (UID: "72395eff-28a5-4604-b9ee-36a0c8cf6b37"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.979243 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "72395eff-28a5-4604-b9ee-36a0c8cf6b37" (UID: "72395eff-28a5-4604-b9ee-36a0c8cf6b37"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.980111 4878 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.980154 4878 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/72395eff-28a5-4604-b9ee-36a0c8cf6b37-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.988126 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-scripts" (OuterVolumeSpecName: "scripts") pod "72395eff-28a5-4604-b9ee-36a0c8cf6b37" (UID: "72395eff-28a5-4604-b9ee-36a0c8cf6b37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:09 crc kubenswrapper[4878]: I1204 15:59:09.997419 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" podStartSLOduration=2.997389438 podStartE2EDuration="2.997389438s" podCreationTimestamp="2025-12-04 15:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:59:09.969679107 +0000 UTC m=+1393.932216073" watchObservedRunningTime="2025-12-04 15:59:09.997389438 +0000 UTC m=+1393.959926384"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.005731 4878 scope.go:117] "RemoveContainer" containerID="46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.010250 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72395eff-28a5-4604-b9ee-36a0c8cf6b37-kube-api-access-prfg2" (OuterVolumeSpecName: "kube-api-access-prfg2") pod "72395eff-28a5-4604-b9ee-36a0c8cf6b37" (UID: "72395eff-28a5-4604-b9ee-36a0c8cf6b37"). InnerVolumeSpecName "kube-api-access-prfg2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.045096 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "72395eff-28a5-4604-b9ee-36a0c8cf6b37" (UID: "72395eff-28a5-4604-b9ee-36a0c8cf6b37"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.082892 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-scripts\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.082932 4878 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.082945 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prfg2\" (UniqueName: \"kubernetes.io/projected/72395eff-28a5-4604-b9ee-36a0c8cf6b37-kube-api-access-prfg2\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.105866 4878 scope.go:117] "RemoveContainer" containerID="ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.114881 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72395eff-28a5-4604-b9ee-36a0c8cf6b37" (UID: "72395eff-28a5-4604-b9ee-36a0c8cf6b37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.126399 4878 scope.go:117] "RemoveContainer" containerID="7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b"
Dec 04 15:59:10 crc kubenswrapper[4878]: E1204 15:59:10.126842 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b\": container with ID starting with 7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b not found: ID does not exist" containerID="7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.126887 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b"} err="failed to get container status \"7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b\": rpc error: code = NotFound desc = could not find container \"7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b\": container with ID starting with 7fa59e636546571865bd0eb6b9e63ebb703e2cc8526c8e7540694cee5275d43b not found: ID does not exist"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.126916 4878 scope.go:117] "RemoveContainer" containerID="3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b"
Dec 04 15:59:10 crc kubenswrapper[4878]: E1204 15:59:10.127628 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b\": container with ID starting with 3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b not found: ID does not exist" containerID="3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.127662 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b"} err="failed to get container status \"3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b\": rpc error: code = NotFound desc = could not find container \"3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b\": container with ID starting with 3bc9478c586e464abba79afec7690643091a9dcb2ab239270f326b10fb92886b not found: ID does not exist"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.127678 4878 scope.go:117] "RemoveContainer" containerID="46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574"
Dec 04 15:59:10 crc kubenswrapper[4878]: E1204 15:59:10.127991 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574\": container with ID starting with 46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574 not found: ID does not exist" containerID="46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.128043 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574"} err="failed to get container status \"46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574\": rpc error: code = NotFound desc = could not find container \"46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574\": container with ID starting with 46a9c34a8cb12d3a70b8e1b8687924b36374910322a22155b0ac70b8d64f8574 not found: ID does not exist"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.128084 4878 scope.go:117] "RemoveContainer" containerID="ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d"
Dec 04 15:59:10 crc kubenswrapper[4878]: E1204 15:59:10.128912 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d\": container with ID starting with ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d not found: ID does not exist" containerID="ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.128943 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d"} err="failed to get container status \"ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d\": rpc error: code = NotFound desc = could not find container \"ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d\": container with ID starting with ae531f4eb241f8528850a0cc5e5900e31a73f63bec182fa386822787e509be9d not found: ID does not exist"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.135887 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-config-data" (OuterVolumeSpecName: "config-data") pod "72395eff-28a5-4604-b9ee-36a0c8cf6b37" (UID: "72395eff-28a5-4604-b9ee-36a0c8cf6b37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.185070 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.185107 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72395eff-28a5-4604-b9ee-36a0c8cf6b37-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.312622 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.325577 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.341744 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 04 15:59:10 crc kubenswrapper[4878]: E1204 15:59:10.342242 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="ceilometer-notification-agent"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.342267 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="ceilometer-notification-agent"
Dec 04 15:59:10 crc kubenswrapper[4878]: E1204 15:59:10.342288 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="sg-core"
Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.342294 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="sg-core"
Dec 04 15:59:10 crc kubenswrapper[4878]: E1204 15:59:10.342308 4878 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="proxy-httpd" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.342314 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="proxy-httpd" Dec 04 15:59:10 crc kubenswrapper[4878]: E1204 15:59:10.342327 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="ceilometer-central-agent" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.342333 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="ceilometer-central-agent" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.342670 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="ceilometer-notification-agent" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.342694 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="proxy-httpd" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.342714 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="sg-core" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.342731 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" containerName="ceilometer-central-agent" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.344747 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.361590 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.362008 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.362357 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.394680 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.501699 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2nn\" (UniqueName: \"kubernetes.io/projected/e9985845-2886-4e2f-811a-0b306199f949-kube-api-access-vh2nn\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.502514 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.502739 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.502847 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-config-data\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.503027 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-run-httpd\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.503059 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-log-httpd\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.503113 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-scripts\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.503238 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.604974 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.605370 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2nn\" (UniqueName: \"kubernetes.io/projected/e9985845-2886-4e2f-811a-0b306199f949-kube-api-access-vh2nn\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.605402 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.605453 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.605494 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-config-data\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.605563 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-run-httpd\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc 
kubenswrapper[4878]: I1204 15:59:10.605588 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-log-httpd\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.605623 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-scripts\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.606431 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-log-httpd\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.606461 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-run-httpd\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.611339 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-scripts\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.611412 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " 
pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.612492 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.612660 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.614196 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-config-data\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.629990 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2nn\" (UniqueName: \"kubernetes.io/projected/e9985845-2886-4e2f-811a-0b306199f949-kube-api-access-vh2nn\") pod \"ceilometer-0\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.715547 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.943347 4878 generic.go:334] "Generic (PLEG): container finished" podID="0969df37-1474-46f7-b310-1611b12d6396" containerID="8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2" exitCode=143 Dec 04 15:59:10 crc kubenswrapper[4878]: I1204 15:59:10.943583 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0969df37-1474-46f7-b310-1611b12d6396","Type":"ContainerDied","Data":"8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2"} Dec 04 15:59:11 crc kubenswrapper[4878]: I1204 15:59:11.191017 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72395eff-28a5-4604-b9ee-36a0c8cf6b37" path="/var/lib/kubelet/pods/72395eff-28a5-4604-b9ee-36a0c8cf6b37/volumes" Dec 04 15:59:11 crc kubenswrapper[4878]: I1204 15:59:11.200283 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:59:11 crc kubenswrapper[4878]: W1204 15:59:11.203837 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9985845_2886_4e2f_811a_0b306199f949.slice/crio-877f3a7f669e9f8cf4be700afe66cdfcdd87932e502724f20e7fcf1d83238157 WatchSource:0}: Error finding container 877f3a7f669e9f8cf4be700afe66cdfcdd87932e502724f20e7fcf1d83238157: Status 404 returned error can't find the container with id 877f3a7f669e9f8cf4be700afe66cdfcdd87932e502724f20e7fcf1d83238157 Dec 04 15:59:11 crc kubenswrapper[4878]: I1204 15:59:11.273100 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:59:11 crc kubenswrapper[4878]: I1204 15:59:11.955017 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerStarted","Data":"6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37"} Dec 04 15:59:11 crc kubenswrapper[4878]: I1204 15:59:11.955335 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerStarted","Data":"877f3a7f669e9f8cf4be700afe66cdfcdd87932e502724f20e7fcf1d83238157"} Dec 04 15:59:12 crc kubenswrapper[4878]: I1204 15:59:12.201813 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:59:12 crc kubenswrapper[4878]: I1204 15:59:12.971045 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerStarted","Data":"be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed"} Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.611927 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.781814 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0969df37-1474-46f7-b310-1611b12d6396-logs\") pod \"0969df37-1474-46f7-b310-1611b12d6396\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.782064 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj7cl\" (UniqueName: \"kubernetes.io/projected/0969df37-1474-46f7-b310-1611b12d6396-kube-api-access-kj7cl\") pod \"0969df37-1474-46f7-b310-1611b12d6396\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.782133 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-combined-ca-bundle\") pod \"0969df37-1474-46f7-b310-1611b12d6396\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.782306 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-config-data\") pod \"0969df37-1474-46f7-b310-1611b12d6396\" (UID: \"0969df37-1474-46f7-b310-1611b12d6396\") " Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.782397 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0969df37-1474-46f7-b310-1611b12d6396-logs" (OuterVolumeSpecName: "logs") pod "0969df37-1474-46f7-b310-1611b12d6396" (UID: "0969df37-1474-46f7-b310-1611b12d6396"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.783614 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0969df37-1474-46f7-b310-1611b12d6396-logs\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.794132 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0969df37-1474-46f7-b310-1611b12d6396-kube-api-access-kj7cl" (OuterVolumeSpecName: "kube-api-access-kj7cl") pod "0969df37-1474-46f7-b310-1611b12d6396" (UID: "0969df37-1474-46f7-b310-1611b12d6396"). InnerVolumeSpecName "kube-api-access-kj7cl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.815326 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-config-data" (OuterVolumeSpecName: "config-data") pod "0969df37-1474-46f7-b310-1611b12d6396" (UID: "0969df37-1474-46f7-b310-1611b12d6396"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.838170 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0969df37-1474-46f7-b310-1611b12d6396" (UID: "0969df37-1474-46f7-b310-1611b12d6396"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.885967 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj7cl\" (UniqueName: \"kubernetes.io/projected/0969df37-1474-46f7-b310-1611b12d6396-kube-api-access-kj7cl\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.886010 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.886021 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0969df37-1474-46f7-b310-1611b12d6396-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.984269 4878 generic.go:334] "Generic (PLEG): container finished" podID="0969df37-1474-46f7-b310-1611b12d6396" containerID="912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9" exitCode=0 Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.984362 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.984373 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0969df37-1474-46f7-b310-1611b12d6396","Type":"ContainerDied","Data":"912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9"} Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.985638 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0969df37-1474-46f7-b310-1611b12d6396","Type":"ContainerDied","Data":"0cb70926d840638e67b26127402a7ed79bfffb8510918b4e79b8e949a52afd89"} Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.985671 4878 scope.go:117] "RemoveContainer" containerID="912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9" Dec 04 15:59:13 crc kubenswrapper[4878]: I1204 15:59:13.987936 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerStarted","Data":"0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759"} Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.040424 4878 scope.go:117] "RemoveContainer" containerID="8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.040777 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.051909 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.076329 4878 scope.go:117] "RemoveContainer" containerID="912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9" Dec 04 15:59:14 crc kubenswrapper[4878]: E1204 15:59:14.080463 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9\": container with ID starting with 912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9 not found: ID does not exist" containerID="912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.080506 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9"} err="failed to get container status \"912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9\": rpc error: code = NotFound desc = could not find container \"912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9\": container with ID starting with 912ee3d166ba9931ec8039bc03ba8ab118f007b9d486607796c0bdef7fe2d0d9 not found: ID does not exist" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.080535 4878 scope.go:117] "RemoveContainer" containerID="8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2" Dec 04 15:59:14 crc kubenswrapper[4878]: E1204 15:59:14.081203 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2\": container with ID starting with 8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2 not found: ID does not exist" containerID="8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.081234 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2"} err="failed to get container status \"8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2\": rpc error: code = NotFound desc = could not find container \"8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2\": container with ID 
starting with 8d24a40f0fa7169eb5e62ea59f766645cbd3b770091afcc6705cb609ced3c9c2 not found: ID does not exist" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.089931 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:14 crc kubenswrapper[4878]: E1204 15:59:14.090481 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-log" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.090506 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-log" Dec 04 15:59:14 crc kubenswrapper[4878]: E1204 15:59:14.090521 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-api" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.090527 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-api" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.090775 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-api" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.090800 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="0969df37-1474-46f7-b310-1611b12d6396" containerName="nova-api-log" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.091911 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.096925 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.096998 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.097310 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.099864 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.192683 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.192756 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.192824 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-config-data\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.192887 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.192914 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4zxq\" (UniqueName: \"kubernetes.io/projected/b5481178-e051-445f-bb17-64c74007dc15-kube-api-access-f4zxq\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.192952 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5481178-e051-445f-bb17-64c74007dc15-logs\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.264371 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.295390 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-config-data\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.295472 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.295519 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4zxq\" (UniqueName: 
\"kubernetes.io/projected/b5481178-e051-445f-bb17-64c74007dc15-kube-api-access-f4zxq\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.295560 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5481178-e051-445f-bb17-64c74007dc15-logs\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.295651 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.295677 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.301065 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5481178-e051-445f-bb17-64c74007dc15-logs\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.323225 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.323947 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-public-tls-certs\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.324383 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.325790 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-config-data\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.329018 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4zxq\" (UniqueName: \"kubernetes.io/projected/b5481178-e051-445f-bb17-64c74007dc15-kube-api-access-f4zxq\") pod \"nova-api-0\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") " pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.421024 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:59:14 crc kubenswrapper[4878]: I1204 15:59:14.890656 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:15 crc kubenswrapper[4878]: I1204 15:59:15.000297 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5481178-e051-445f-bb17-64c74007dc15","Type":"ContainerStarted","Data":"8e6fd6b5f2d8113e4127f33d9fe3435556d1ca1778434f272ecb40ecb3c0a258"} Dec 04 15:59:15 crc kubenswrapper[4878]: I1204 15:59:15.198751 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0969df37-1474-46f7-b310-1611b12d6396" path="/var/lib/kubelet/pods/0969df37-1474-46f7-b310-1611b12d6396/volumes" Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.016146 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5481178-e051-445f-bb17-64c74007dc15","Type":"ContainerStarted","Data":"a5ad74f1fa693d7589fb7d66686c4830ca8abc49d7f0d51e3fe87c8820c6b7ee"} Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.016219 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5481178-e051-445f-bb17-64c74007dc15","Type":"ContainerStarted","Data":"109295c711f646496e794115f3c110bfeae09482997ac849417d8798f3f0620b"} Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.026837 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerStarted","Data":"0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6"} Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.027130 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.027156 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="ceilometer-central-agent" containerID="cri-o://6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37" gracePeriod=30 Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.027178 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="sg-core" containerID="cri-o://0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759" gracePeriod=30 Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.027206 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="proxy-httpd" containerID="cri-o://0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6" gracePeriod=30 Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.027178 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="ceilometer-notification-agent" containerID="cri-o://be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed" gracePeriod=30 Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.040993 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.040972831 podStartE2EDuration="2.040972831s" podCreationTimestamp="2025-12-04 15:59:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:59:16.040192751 +0000 UTC m=+1400.002729707" watchObservedRunningTime="2025-12-04 15:59:16.040972831 +0000 UTC m=+1400.003509787" Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.073085 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5921794929999997 
podStartE2EDuration="6.073054012s" podCreationTimestamp="2025-12-04 15:59:10 +0000 UTC" firstStartedPulling="2025-12-04 15:59:11.206400506 +0000 UTC m=+1395.168937462" lastFinishedPulling="2025-12-04 15:59:14.687275015 +0000 UTC m=+1398.649811981" observedRunningTime="2025-12-04 15:59:16.066090846 +0000 UTC m=+1400.028627802" watchObservedRunningTime="2025-12-04 15:59:16.073054012 +0000 UTC m=+1400.035590978" Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.273290 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:59:16 crc kubenswrapper[4878]: I1204 15:59:16.296945 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.043241 4878 generic.go:334] "Generic (PLEG): container finished" podID="e9985845-2886-4e2f-811a-0b306199f949" containerID="0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6" exitCode=0 Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.044164 4878 generic.go:334] "Generic (PLEG): container finished" podID="e9985845-2886-4e2f-811a-0b306199f949" containerID="0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759" exitCode=2 Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.044295 4878 generic.go:334] "Generic (PLEG): container finished" podID="e9985845-2886-4e2f-811a-0b306199f949" containerID="be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed" exitCode=0 Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.043442 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerDied","Data":"0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6"} Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.044542 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerDied","Data":"0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759"} Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.044565 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerDied","Data":"be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed"} Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.063430 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.274039 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-hh8f6"] Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.284662 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.286190 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hh8f6"] Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.291225 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.291347 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.376085 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-scripts\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.376166 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-config-data\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.376527 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.376984 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qb87\" (UniqueName: \"kubernetes.io/projected/59a41a73-2e70-46ab-9608-523d804673b9-kube-api-access-7qb87\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.479061 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qb87\" (UniqueName: \"kubernetes.io/projected/59a41a73-2e70-46ab-9608-523d804673b9-kube-api-access-7qb87\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.479130 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-scripts\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.479164 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-config-data\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.479228 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.486782 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.487136 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-scripts\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.502654 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-config-data\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.503246 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qb87\" 
(UniqueName: \"kubernetes.io/projected/59a41a73-2e70-46ab-9608-523d804673b9-kube-api-access-7qb87\") pod \"nova-cell1-cell-mapping-hh8f6\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.562046 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.608350 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.627208 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-8rl4z"] Dec 04 15:59:17 crc kubenswrapper[4878]: I1204 15:59:17.627531 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" podUID="8be76ef1-f903-46e5-a874-641b88528cb6" containerName="dnsmasq-dns" containerID="cri-o://1b4be42dbfa0d664c473104873bfd90f7f7d030bd760cc57180dd2e9507fa369" gracePeriod=10 Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.069473 4878 generic.go:334] "Generic (PLEG): container finished" podID="8be76ef1-f903-46e5-a874-641b88528cb6" containerID="1b4be42dbfa0d664c473104873bfd90f7f7d030bd760cc57180dd2e9507fa369" exitCode=0 Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.070622 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" event={"ID":"8be76ef1-f903-46e5-a874-641b88528cb6","Type":"ContainerDied","Data":"1b4be42dbfa0d664c473104873bfd90f7f7d030bd760cc57180dd2e9507fa369"} Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.130521 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-hh8f6"] Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.206849 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.309164 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-swift-storage-0\") pod \"8be76ef1-f903-46e5-a874-641b88528cb6\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.309569 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-sb\") pod \"8be76ef1-f903-46e5-a874-641b88528cb6\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.309600 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nwjf\" (UniqueName: \"kubernetes.io/projected/8be76ef1-f903-46e5-a874-641b88528cb6-kube-api-access-6nwjf\") pod \"8be76ef1-f903-46e5-a874-641b88528cb6\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.309630 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-nb\") pod \"8be76ef1-f903-46e5-a874-641b88528cb6\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.309683 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-svc\") pod \"8be76ef1-f903-46e5-a874-641b88528cb6\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.309813 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-config\") pod \"8be76ef1-f903-46e5-a874-641b88528cb6\" (UID: \"8be76ef1-f903-46e5-a874-641b88528cb6\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.353495 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be76ef1-f903-46e5-a874-641b88528cb6-kube-api-access-6nwjf" (OuterVolumeSpecName: "kube-api-access-6nwjf") pod "8be76ef1-f903-46e5-a874-641b88528cb6" (UID: "8be76ef1-f903-46e5-a874-641b88528cb6"). InnerVolumeSpecName "kube-api-access-6nwjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.421574 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nwjf\" (UniqueName: \"kubernetes.io/projected/8be76ef1-f903-46e5-a874-641b88528cb6-kube-api-access-6nwjf\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.427411 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8be76ef1-f903-46e5-a874-641b88528cb6" (UID: "8be76ef1-f903-46e5-a874-641b88528cb6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.439038 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8be76ef1-f903-46e5-a874-641b88528cb6" (UID: "8be76ef1-f903-46e5-a874-641b88528cb6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.444223 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8be76ef1-f903-46e5-a874-641b88528cb6" (UID: "8be76ef1-f903-46e5-a874-641b88528cb6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.450327 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-config" (OuterVolumeSpecName: "config") pod "8be76ef1-f903-46e5-a874-641b88528cb6" (UID: "8be76ef1-f903-46e5-a874-641b88528cb6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.476546 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8be76ef1-f903-46e5-a874-641b88528cb6" (UID: "8be76ef1-f903-46e5-a874-641b88528cb6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.524668 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.524715 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.524729 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.524742 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.524753 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be76ef1-f903-46e5-a874-641b88528cb6-config\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.592806 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.728825 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-sg-core-conf-yaml\") pod \"e9985845-2886-4e2f-811a-0b306199f949\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.728907 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-ceilometer-tls-certs\") pod \"e9985845-2886-4e2f-811a-0b306199f949\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.729066 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-combined-ca-bundle\") pod \"e9985845-2886-4e2f-811a-0b306199f949\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.729135 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-run-httpd\") pod \"e9985845-2886-4e2f-811a-0b306199f949\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.729172 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh2nn\" (UniqueName: \"kubernetes.io/projected/e9985845-2886-4e2f-811a-0b306199f949-kube-api-access-vh2nn\") pod \"e9985845-2886-4e2f-811a-0b306199f949\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.729199 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-log-httpd\") pod \"e9985845-2886-4e2f-811a-0b306199f949\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.729222 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-config-data\") pod \"e9985845-2886-4e2f-811a-0b306199f949\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.729271 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-scripts\") pod \"e9985845-2886-4e2f-811a-0b306199f949\" (UID: \"e9985845-2886-4e2f-811a-0b306199f949\") " Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.730891 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e9985845-2886-4e2f-811a-0b306199f949" (UID: "e9985845-2886-4e2f-811a-0b306199f949"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.731227 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e9985845-2886-4e2f-811a-0b306199f949" (UID: "e9985845-2886-4e2f-811a-0b306199f949"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.751615 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-scripts" (OuterVolumeSpecName: "scripts") pod "e9985845-2886-4e2f-811a-0b306199f949" (UID: "e9985845-2886-4e2f-811a-0b306199f949"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.757079 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9985845-2886-4e2f-811a-0b306199f949-kube-api-access-vh2nn" (OuterVolumeSpecName: "kube-api-access-vh2nn") pod "e9985845-2886-4e2f-811a-0b306199f949" (UID: "e9985845-2886-4e2f-811a-0b306199f949"). InnerVolumeSpecName "kube-api-access-vh2nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.786908 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "e9985845-2886-4e2f-811a-0b306199f949" (UID: "e9985845-2886-4e2f-811a-0b306199f949"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.789847 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e9985845-2886-4e2f-811a-0b306199f949" (UID: "e9985845-2886-4e2f-811a-0b306199f949"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.824848 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9985845-2886-4e2f-811a-0b306199f949" (UID: "e9985845-2886-4e2f-811a-0b306199f949"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.832235 4878 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.832277 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh2nn\" (UniqueName: \"kubernetes.io/projected/e9985845-2886-4e2f-811a-0b306199f949-kube-api-access-vh2nn\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.832294 4878 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e9985845-2886-4e2f-811a-0b306199f949-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.832308 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.832319 4878 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.832331 4878 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.832344 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.840303 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-config-data" (OuterVolumeSpecName: "config-data") pod "e9985845-2886-4e2f-811a-0b306199f949" (UID: "e9985845-2886-4e2f-811a-0b306199f949"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:18 crc kubenswrapper[4878]: I1204 15:59:18.934302 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9985845-2886-4e2f-811a-0b306199f949-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.085752 4878 generic.go:334] "Generic (PLEG): container finished" podID="e9985845-2886-4e2f-811a-0b306199f949" containerID="6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37" exitCode=0 Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.085823 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.085824 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerDied","Data":"6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37"} Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.085906 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e9985845-2886-4e2f-811a-0b306199f949","Type":"ContainerDied","Data":"877f3a7f669e9f8cf4be700afe66cdfcdd87932e502724f20e7fcf1d83238157"} Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.085927 4878 scope.go:117] "RemoveContainer" containerID="0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.096722 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hh8f6" event={"ID":"59a41a73-2e70-46ab-9608-523d804673b9","Type":"ContainerStarted","Data":"e193f9008f8e6a3781474bc471c383f4c23114a29ca653bf57a8475442f375b9"} Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.096775 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hh8f6" event={"ID":"59a41a73-2e70-46ab-9608-523d804673b9","Type":"ContainerStarted","Data":"f3deed83b076030d9be5e15fa7c70d91226ba2c9fd892ed06f57a84bc9e94a0b"} Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.116714 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-hh8f6" podStartSLOduration=2.116689447 podStartE2EDuration="2.116689447s" podCreationTimestamp="2025-12-04 15:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:59:19.115856756 +0000 UTC m=+1403.078393722" watchObservedRunningTime="2025-12-04 15:59:19.116689447 
+0000 UTC m=+1403.079226403" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.122026 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" event={"ID":"8be76ef1-f903-46e5-a874-641b88528cb6","Type":"ContainerDied","Data":"07925f279cf77c7c82ebf3efa1d7d163635d198d53b253d570e645077d4b45ea"} Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.122225 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-8rl4z" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.154400 4878 scope.go:117] "RemoveContainer" containerID="0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.173566 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.207633 4878 scope.go:117] "RemoveContainer" containerID="be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.226547 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.229980 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-8rl4z"] Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.240244 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.240994 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="proxy-httpd" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241018 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="proxy-httpd" Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.241041 4878 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="sg-core" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241047 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="sg-core" Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.241058 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be76ef1-f903-46e5-a874-641b88528cb6" containerName="dnsmasq-dns" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241063 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be76ef1-f903-46e5-a874-641b88528cb6" containerName="dnsmasq-dns" Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.241093 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="ceilometer-notification-agent" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241099 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="ceilometer-notification-agent" Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.241114 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be76ef1-f903-46e5-a874-641b88528cb6" containerName="init" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241120 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be76ef1-f903-46e5-a874-641b88528cb6" containerName="init" Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.241142 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="ceilometer-central-agent" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241148 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="ceilometer-central-agent" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241375 4878 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8be76ef1-f903-46e5-a874-641b88528cb6" containerName="dnsmasq-dns" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241399 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="sg-core" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241415 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="proxy-httpd" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241425 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="ceilometer-notification-agent" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.241437 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9985845-2886-4e2f-811a-0b306199f949" containerName="ceilometer-central-agent" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.245382 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.246118 4878 scope.go:117] "RemoveContainer" containerID="6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.248481 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.249681 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.250371 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.254663 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-8rl4z"] Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.266382 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.277995 4878 scope.go:117] "RemoveContainer" containerID="0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6" Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.278316 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6\": container with ID starting with 0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6 not found: ID does not exist" containerID="0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.278352 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6"} err="failed to get container status 
\"0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6\": rpc error: code = NotFound desc = could not find container \"0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6\": container with ID starting with 0515d5ca89e4b78c527b62cef5004b82fe61a615e16dfc029e707481e8249fb6 not found: ID does not exist" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.278393 4878 scope.go:117] "RemoveContainer" containerID="0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759" Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.278773 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759\": container with ID starting with 0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759 not found: ID does not exist" containerID="0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.278807 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759"} err="failed to get container status \"0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759\": rpc error: code = NotFound desc = could not find container \"0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759\": container with ID starting with 0517e27ad8537abe4641e016ea723645677b476153a838a8ba40bbf8fb7f3759 not found: ID does not exist" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.278827 4878 scope.go:117] "RemoveContainer" containerID="be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed" Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.282239 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed\": container with ID starting with be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed not found: ID does not exist" containerID="be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.282283 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed"} err="failed to get container status \"be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed\": rpc error: code = NotFound desc = could not find container \"be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed\": container with ID starting with be2139749fa1717468958e13249953246369376984c7148cb77da6fc25dfeaed not found: ID does not exist" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.282305 4878 scope.go:117] "RemoveContainer" containerID="6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37" Dec 04 15:59:19 crc kubenswrapper[4878]: E1204 15:59:19.282977 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37\": container with ID starting with 6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37 not found: ID does not exist" containerID="6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.283006 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37"} err="failed to get container status \"6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37\": rpc error: code = NotFound desc = could not find container \"6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37\": container with ID 
starting with 6af56ae5139204a12042190daf99c7dc9ce405049307972eb96623e5e5b96e37 not found: ID does not exist" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.283026 4878 scope.go:117] "RemoveContainer" containerID="1b4be42dbfa0d664c473104873bfd90f7f7d030bd760cc57180dd2e9507fa369" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.309953 4878 scope.go:117] "RemoveContainer" containerID="7bbbba7e6a411a4c28d87a32d91e39fce277f4ecb5af481919af06a783e3ae95" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.346550 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-scripts\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.346595 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.346700 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmrsr\" (UniqueName: \"kubernetes.io/projected/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-kube-api-access-gmrsr\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.346740 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 
crc kubenswrapper[4878]: I1204 15:59:19.346765 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-log-httpd\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.346785 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-run-httpd\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.346817 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-config-data\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.346852 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.449027 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-config-data\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.449101 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.449204 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-scripts\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.449231 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.449324 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmrsr\" (UniqueName: \"kubernetes.io/projected/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-kube-api-access-gmrsr\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.449378 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.449408 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-log-httpd\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc 
kubenswrapper[4878]: I1204 15:59:19.449441 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-run-httpd\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.449982 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-run-httpd\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.450416 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-log-httpd\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.454113 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-scripts\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.454209 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.454438 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-config-data\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") 
" pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.454706 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.476120 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.485416 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmrsr\" (UniqueName: \"kubernetes.io/projected/2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3-kube-api-access-gmrsr\") pod \"ceilometer-0\" (UID: \"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3\") " pod="openstack/ceilometer-0" Dec 04 15:59:19 crc kubenswrapper[4878]: I1204 15:59:19.564560 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 04 15:59:20 crc kubenswrapper[4878]: I1204 15:59:20.060676 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 04 15:59:20 crc kubenswrapper[4878]: W1204 15:59:20.064029 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bbda1b6_b67d_45f7_ba2f_1bc7ddf5dda3.slice/crio-186c1a97f4fe72c9812d3d8a8189e1bc6b3abcb5eed79a34a8d6ab8d6a72058f WatchSource:0}: Error finding container 186c1a97f4fe72c9812d3d8a8189e1bc6b3abcb5eed79a34a8d6ab8d6a72058f: Status 404 returned error can't find the container with id 186c1a97f4fe72c9812d3d8a8189e1bc6b3abcb5eed79a34a8d6ab8d6a72058f Dec 04 15:59:20 crc kubenswrapper[4878]: I1204 15:59:20.137753 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3","Type":"ContainerStarted","Data":"186c1a97f4fe72c9812d3d8a8189e1bc6b3abcb5eed79a34a8d6ab8d6a72058f"} Dec 04 15:59:21 crc kubenswrapper[4878]: I1204 15:59:21.151444 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3","Type":"ContainerStarted","Data":"14fb0baf546326d591a41f1937fa04714fba9cf6d6bc7963a74fecd6a32e8b7d"} Dec 04 15:59:21 crc kubenswrapper[4878]: I1204 15:59:21.192000 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be76ef1-f903-46e5-a874-641b88528cb6" path="/var/lib/kubelet/pods/8be76ef1-f903-46e5-a874-641b88528cb6/volumes" Dec 04 15:59:21 crc kubenswrapper[4878]: I1204 15:59:21.193735 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9985845-2886-4e2f-811a-0b306199f949" path="/var/lib/kubelet/pods/e9985845-2886-4e2f-811a-0b306199f949/volumes" Dec 04 15:59:22 crc kubenswrapper[4878]: I1204 15:59:22.168039 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3","Type":"ContainerStarted","Data":"d35dff8d785b524e946d5212f4d2f66e7909fcffae92adcb33b3d48988fc8441"} Dec 04 15:59:22 crc kubenswrapper[4878]: I1204 15:59:22.168379 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3","Type":"ContainerStarted","Data":"b8f1ff6285f276a9e367c711183c48bd8b6581eda1ad610144fb8099d57adb0b"} Dec 04 15:59:24 crc kubenswrapper[4878]: I1204 15:59:24.192231 4878 generic.go:334] "Generic (PLEG): container finished" podID="59a41a73-2e70-46ab-9608-523d804673b9" containerID="e193f9008f8e6a3781474bc471c383f4c23114a29ca653bf57a8475442f375b9" exitCode=0 Dec 04 15:59:24 crc kubenswrapper[4878]: I1204 15:59:24.192746 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hh8f6" event={"ID":"59a41a73-2e70-46ab-9608-523d804673b9","Type":"ContainerDied","Data":"e193f9008f8e6a3781474bc471c383f4c23114a29ca653bf57a8475442f375b9"} Dec 04 15:59:24 crc kubenswrapper[4878]: I1204 15:59:24.196317 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3","Type":"ContainerStarted","Data":"da566ee7f25cadc301f8563dc01dcabffa6916abe5bcc7034ad23f4f4286b645"} Dec 04 15:59:24 crc kubenswrapper[4878]: I1204 15:59:24.197497 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 04 15:59:24 crc kubenswrapper[4878]: I1204 15:59:24.247252 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.151717549 podStartE2EDuration="5.247224364s" podCreationTimestamp="2025-12-04 15:59:19 +0000 UTC" firstStartedPulling="2025-12-04 15:59:20.066509761 +0000 UTC m=+1404.029046717" lastFinishedPulling="2025-12-04 15:59:23.162016576 +0000 UTC m=+1407.124553532" observedRunningTime="2025-12-04 15:59:24.243755506 +0000 UTC 
m=+1408.206292472" watchObservedRunningTime="2025-12-04 15:59:24.247224364 +0000 UTC m=+1408.209761320" Dec 04 15:59:24 crc kubenswrapper[4878]: I1204 15:59:24.423370 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 15:59:24 crc kubenswrapper[4878]: I1204 15:59:24.423419 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.444076 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.444765 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.199:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.601635 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hh8f6" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.713101 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qb87\" (UniqueName: \"kubernetes.io/projected/59a41a73-2e70-46ab-9608-523d804673b9-kube-api-access-7qb87\") pod \"59a41a73-2e70-46ab-9608-523d804673b9\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.713423 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-config-data\") pod \"59a41a73-2e70-46ab-9608-523d804673b9\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.713455 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-combined-ca-bundle\") pod \"59a41a73-2e70-46ab-9608-523d804673b9\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.713518 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-scripts\") pod \"59a41a73-2e70-46ab-9608-523d804673b9\" (UID: \"59a41a73-2e70-46ab-9608-523d804673b9\") " Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.733151 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a41a73-2e70-46ab-9608-523d804673b9-kube-api-access-7qb87" (OuterVolumeSpecName: "kube-api-access-7qb87") pod "59a41a73-2e70-46ab-9608-523d804673b9" (UID: "59a41a73-2e70-46ab-9608-523d804673b9"). InnerVolumeSpecName "kube-api-access-7qb87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.740722 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-scripts" (OuterVolumeSpecName: "scripts") pod "59a41a73-2e70-46ab-9608-523d804673b9" (UID: "59a41a73-2e70-46ab-9608-523d804673b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.761094 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59a41a73-2e70-46ab-9608-523d804673b9" (UID: "59a41a73-2e70-46ab-9608-523d804673b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.782833 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-config-data" (OuterVolumeSpecName: "config-data") pod "59a41a73-2e70-46ab-9608-523d804673b9" (UID: "59a41a73-2e70-46ab-9608-523d804673b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.817282 4878 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-scripts\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.817319 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qb87\" (UniqueName: \"kubernetes.io/projected/59a41a73-2e70-46ab-9608-523d804673b9-kube-api-access-7qb87\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.817330 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:25 crc kubenswrapper[4878]: I1204 15:59:25.817339 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59a41a73-2e70-46ab-9608-523d804673b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.219111 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-hh8f6"
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.220477 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-hh8f6" event={"ID":"59a41a73-2e70-46ab-9608-523d804673b9","Type":"ContainerDied","Data":"f3deed83b076030d9be5e15fa7c70d91226ba2c9fd892ed06f57a84bc9e94a0b"}
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.220517 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3deed83b076030d9be5e15fa7c70d91226ba2c9fd892ed06f57a84bc9e94a0b"
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.403579 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.403873 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-log" containerID="cri-o://109295c711f646496e794115f3c110bfeae09482997ac849417d8798f3f0620b" gracePeriod=30
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.404022 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-api" containerID="cri-o://a5ad74f1fa693d7589fb7d66686c4830ca8abc49d7f0d51e3fe87c8820c6b7ee" gracePeriod=30
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.429573 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.430185 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="170a9b65-ba8b-46ac-905b-610256381bb0" containerName="nova-scheduler-scheduler" containerID="cri-o://98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d" gracePeriod=30
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.507452 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.507782 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-log" containerID="cri-o://1b0458df873454ec88f35bb0fceecaf9343b892b09979efb1ea96826fe8decc8" gracePeriod=30
Dec 04 15:59:26 crc kubenswrapper[4878]: I1204 15:59:26.508613 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-metadata" containerID="cri-o://ac67d8995b476ce906e3a896cb0630c28dbbeb5587ae362bbf75156d8d062247" gracePeriod=30
Dec 04 15:59:27 crc kubenswrapper[4878]: I1204 15:59:27.229034 4878 generic.go:334] "Generic (PLEG): container finished" podID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerID="1b0458df873454ec88f35bb0fceecaf9343b892b09979efb1ea96826fe8decc8" exitCode=143
Dec 04 15:59:27 crc kubenswrapper[4878]: I1204 15:59:27.229111 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b","Type":"ContainerDied","Data":"1b0458df873454ec88f35bb0fceecaf9343b892b09979efb1ea96826fe8decc8"}
Dec 04 15:59:27 crc kubenswrapper[4878]: I1204 15:59:27.231615 4878 generic.go:334] "Generic (PLEG): container finished" podID="b5481178-e051-445f-bb17-64c74007dc15" containerID="109295c711f646496e794115f3c110bfeae09482997ac849417d8798f3f0620b" exitCode=143
Dec 04 15:59:27 crc kubenswrapper[4878]: I1204 15:59:27.231695 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5481178-e051-445f-bb17-64c74007dc15","Type":"ContainerDied","Data":"109295c711f646496e794115f3c110bfeae09482997ac849417d8798f3f0620b"}
Dec 04 15:59:28 crc kubenswrapper[4878]: I1204 15:59:28.891169 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 04 15:59:28 crc kubenswrapper[4878]: I1204 15:59:28.982809 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-config-data\") pod \"170a9b65-ba8b-46ac-905b-610256381bb0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") "
Dec 04 15:59:28 crc kubenswrapper[4878]: I1204 15:59:28.983035 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp2xg\" (UniqueName: \"kubernetes.io/projected/170a9b65-ba8b-46ac-905b-610256381bb0-kube-api-access-xp2xg\") pod \"170a9b65-ba8b-46ac-905b-610256381bb0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") "
Dec 04 15:59:28 crc kubenswrapper[4878]: I1204 15:59:28.983130 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-combined-ca-bundle\") pod \"170a9b65-ba8b-46ac-905b-610256381bb0\" (UID: \"170a9b65-ba8b-46ac-905b-610256381bb0\") "
Dec 04 15:59:28 crc kubenswrapper[4878]: I1204 15:59:28.989683 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170a9b65-ba8b-46ac-905b-610256381bb0-kube-api-access-xp2xg" (OuterVolumeSpecName: "kube-api-access-xp2xg") pod "170a9b65-ba8b-46ac-905b-610256381bb0" (UID: "170a9b65-ba8b-46ac-905b-610256381bb0"). InnerVolumeSpecName "kube-api-access-xp2xg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.018968 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-config-data" (OuterVolumeSpecName: "config-data") pod "170a9b65-ba8b-46ac-905b-610256381bb0" (UID: "170a9b65-ba8b-46ac-905b-610256381bb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.022017 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "170a9b65-ba8b-46ac-905b-610256381bb0" (UID: "170a9b65-ba8b-46ac-905b-610256381bb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.085341 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.085384 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170a9b65-ba8b-46ac-905b-610256381bb0-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.085394 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp2xg\" (UniqueName: \"kubernetes.io/projected/170a9b65-ba8b-46ac-905b-610256381bb0-kube-api-access-xp2xg\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.258022 4878 generic.go:334] "Generic (PLEG): container finished" podID="170a9b65-ba8b-46ac-905b-610256381bb0" containerID="98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d" exitCode=0
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.258067 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"170a9b65-ba8b-46ac-905b-610256381bb0","Type":"ContainerDied","Data":"98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d"}
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.258095 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"170a9b65-ba8b-46ac-905b-610256381bb0","Type":"ContainerDied","Data":"15a0b793d6cfc601794f1cb2d3a323b197b357973dd055829e746912f6cc9af6"}
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.258114 4878 scope.go:117] "RemoveContainer" containerID="98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.258159 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.295795 4878 scope.go:117] "RemoveContainer" containerID="98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.299046 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 04 15:59:29 crc kubenswrapper[4878]: E1204 15:59:29.299125 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d\": container with ID starting with 98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d not found: ID does not exist" containerID="98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.299181 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d"} err="failed to get container status \"98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d\": rpc error: code = NotFound desc = could not find container \"98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d\": container with ID starting with 98d38971c89a39898510f7a55c744d0f67cd10c25f37a36d3b77fcaa3a506e9d not found: ID does not exist"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.318344 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.332054 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 04 15:59:29 crc kubenswrapper[4878]: E1204 15:59:29.333049 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170a9b65-ba8b-46ac-905b-610256381bb0" containerName="nova-scheduler-scheduler"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.333072 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="170a9b65-ba8b-46ac-905b-610256381bb0" containerName="nova-scheduler-scheduler"
Dec 04 15:59:29 crc kubenswrapper[4878]: E1204 15:59:29.333087 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a41a73-2e70-46ab-9608-523d804673b9" containerName="nova-manage"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.333094 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a41a73-2e70-46ab-9608-523d804673b9" containerName="nova-manage"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.333311 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="170a9b65-ba8b-46ac-905b-610256381bb0" containerName="nova-scheduler-scheduler"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.333328 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a41a73-2e70-46ab-9608-523d804673b9" containerName="nova-manage"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.334043 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.337279 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.352495 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.503996 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8blb7\" (UniqueName: \"kubernetes.io/projected/d111affc-9e51-4123-8f21-138b844702db-kube-api-access-8blb7\") pod \"nova-scheduler-0\" (UID: \"d111affc-9e51-4123-8f21-138b844702db\") " pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.504209 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d111affc-9e51-4123-8f21-138b844702db-config-data\") pod \"nova-scheduler-0\" (UID: \"d111affc-9e51-4123-8f21-138b844702db\") " pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.504565 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d111affc-9e51-4123-8f21-138b844702db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d111affc-9e51-4123-8f21-138b844702db\") " pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.607253 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8blb7\" (UniqueName: \"kubernetes.io/projected/d111affc-9e51-4123-8f21-138b844702db-kube-api-access-8blb7\") pod \"nova-scheduler-0\" (UID: \"d111affc-9e51-4123-8f21-138b844702db\") " pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.607394 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d111affc-9e51-4123-8f21-138b844702db-config-data\") pod \"nova-scheduler-0\" (UID: \"d111affc-9e51-4123-8f21-138b844702db\") " pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.607508 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d111affc-9e51-4123-8f21-138b844702db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d111affc-9e51-4123-8f21-138b844702db\") " pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.612457 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d111affc-9e51-4123-8f21-138b844702db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d111affc-9e51-4123-8f21-138b844702db\") " pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.612492 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d111affc-9e51-4123-8f21-138b844702db-config-data\") pod \"nova-scheduler-0\" (UID: \"d111affc-9e51-4123-8f21-138b844702db\") " pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.629386 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8blb7\" (UniqueName: \"kubernetes.io/projected/d111affc-9e51-4123-8f21-138b844702db-kube-api-access-8blb7\") pod \"nova-scheduler-0\" (UID: \"d111affc-9e51-4123-8f21-138b844702db\") " pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.666280 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.959441 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": dial tcp 10.217.0.192:8775: connect: connection refused"
Dec 04 15:59:29 crc kubenswrapper[4878]: I1204 15:59:29.959452 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.192:8775/\": dial tcp 10.217.0.192:8775: connect: connection refused"
Dec 04 15:59:30 crc kubenswrapper[4878]: I1204 15:59:30.160773 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 04 15:59:30 crc kubenswrapper[4878]: I1204 15:59:30.279960 4878 generic.go:334] "Generic (PLEG): container finished" podID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerID="ac67d8995b476ce906e3a896cb0630c28dbbeb5587ae362bbf75156d8d062247" exitCode=0
Dec 04 15:59:30 crc kubenswrapper[4878]: I1204 15:59:30.280035 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b","Type":"ContainerDied","Data":"ac67d8995b476ce906e3a896cb0630c28dbbeb5587ae362bbf75156d8d062247"}
Dec 04 15:59:30 crc kubenswrapper[4878]: I1204 15:59:30.281130 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d111affc-9e51-4123-8f21-138b844702db","Type":"ContainerStarted","Data":"34ad17ec277cc0e712b8d64d024daf979bd3cd7b63e96569ffbdde903f0207a4"}
Dec 04 15:59:30 crc kubenswrapper[4878]: I1204 15:59:30.843092 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 15:59:30 crc kubenswrapper[4878]: I1204 15:59:30.843370 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.005552 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.068640 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-nova-metadata-tls-certs\") pod \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.068690 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-combined-ca-bundle\") pod \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.068731 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-logs\") pod \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.068823 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hx2d\" (UniqueName: \"kubernetes.io/projected/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-kube-api-access-9hx2d\") pod \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.069639 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-logs" (OuterVolumeSpecName: "logs") pod "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" (UID: "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.075443 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-kube-api-access-9hx2d" (OuterVolumeSpecName: "kube-api-access-9hx2d") pod "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" (UID: "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b"). InnerVolumeSpecName "kube-api-access-9hx2d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.145263 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" (UID: "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.170147 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-config-data\") pod \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\" (UID: \"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.170863 4878 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.170879 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-logs\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.170911 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hx2d\" (UniqueName: \"kubernetes.io/projected/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-kube-api-access-9hx2d\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.171361 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" (UID: "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.194446 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170a9b65-ba8b-46ac-905b-610256381bb0" path="/var/lib/kubelet/pods/170a9b65-ba8b-46ac-905b-610256381bb0/volumes"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.210550 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-config-data" (OuterVolumeSpecName: "config-data") pod "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" (UID: "c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.272537 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.272582 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.310471 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.312057 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b","Type":"ContainerDied","Data":"234d75e4bffa7469f4c33e0d391c72c57855987341893a6007d24e1ff3a05421"}
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.312163 4878 scope.go:117] "RemoveContainer" containerID="ac67d8995b476ce906e3a896cb0630c28dbbeb5587ae362bbf75156d8d062247"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.323423 4878 generic.go:334] "Generic (PLEG): container finished" podID="b5481178-e051-445f-bb17-64c74007dc15" containerID="a5ad74f1fa693d7589fb7d66686c4830ca8abc49d7f0d51e3fe87c8820c6b7ee" exitCode=0
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.323505 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5481178-e051-445f-bb17-64c74007dc15","Type":"ContainerDied","Data":"a5ad74f1fa693d7589fb7d66686c4830ca8abc49d7f0d51e3fe87c8820c6b7ee"}
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.327417 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d111affc-9e51-4123-8f21-138b844702db","Type":"ContainerStarted","Data":"164ad0cf5a6d6e7fe8358e8f956891b613ad6d38825b1b34ca089c380dfb7eba"}
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.354023 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.354000069 podStartE2EDuration="2.354000069s" podCreationTimestamp="2025-12-04 15:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:59:31.349626448 +0000 UTC m=+1415.312163404" watchObservedRunningTime="2025-12-04 15:59:31.354000069 +0000 UTC m=+1415.316537025"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.381954 4878 scope.go:117] "RemoveContainer" containerID="1b0458df873454ec88f35bb0fceecaf9343b892b09979efb1ea96826fe8decc8"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.400990 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.437675 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.454350 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 04 15:59:31 crc kubenswrapper[4878]: E1204 15:59:31.455008 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-log"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.455037 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-log"
Dec 04 15:59:31 crc kubenswrapper[4878]: E1204 15:59:31.455099 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-metadata"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.455110 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-metadata"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.455294 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-log"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.455318 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" containerName="nova-metadata-metadata"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.456508 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.462776 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.463323 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.464924 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.520405 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.589083 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5481178-e051-445f-bb17-64c74007dc15-logs\") pod \"b5481178-e051-445f-bb17-64c74007dc15\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.589181 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-config-data\") pod \"b5481178-e051-445f-bb17-64c74007dc15\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.589243 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4zxq\" (UniqueName: \"kubernetes.io/projected/b5481178-e051-445f-bb17-64c74007dc15-kube-api-access-f4zxq\") pod \"b5481178-e051-445f-bb17-64c74007dc15\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.589350 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-public-tls-certs\") pod \"b5481178-e051-445f-bb17-64c74007dc15\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.589393 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-combined-ca-bundle\") pod \"b5481178-e051-445f-bb17-64c74007dc15\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.589561 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-internal-tls-certs\") pod \"b5481178-e051-445f-bb17-64c74007dc15\" (UID: \"b5481178-e051-445f-bb17-64c74007dc15\") "
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.589583 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5481178-e051-445f-bb17-64c74007dc15-logs" (OuterVolumeSpecName: "logs") pod "b5481178-e051-445f-bb17-64c74007dc15" (UID: "b5481178-e051-445f-bb17-64c74007dc15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.590081 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.590143 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-config-data\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.590199 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.590693 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw5xk\" (UniqueName: \"kubernetes.io/projected/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-kube-api-access-kw5xk\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.590761 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-logs\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.590949 4878 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5481178-e051-445f-bb17-64c74007dc15-logs\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.594544 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5481178-e051-445f-bb17-64c74007dc15-kube-api-access-f4zxq" (OuterVolumeSpecName: "kube-api-access-f4zxq") pod "b5481178-e051-445f-bb17-64c74007dc15" (UID: "b5481178-e051-445f-bb17-64c74007dc15"). InnerVolumeSpecName "kube-api-access-f4zxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.624167 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5481178-e051-445f-bb17-64c74007dc15" (UID: "b5481178-e051-445f-bb17-64c74007dc15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.637497 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-config-data" (OuterVolumeSpecName: "config-data") pod "b5481178-e051-445f-bb17-64c74007dc15" (UID: "b5481178-e051-445f-bb17-64c74007dc15"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.655085 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b5481178-e051-445f-bb17-64c74007dc15" (UID: "b5481178-e051-445f-bb17-64c74007dc15"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.657816 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b5481178-e051-445f-bb17-64c74007dc15" (UID: "b5481178-e051-445f-bb17-64c74007dc15"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.698333 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw5xk\" (UniqueName: \"kubernetes.io/projected/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-kube-api-access-kw5xk\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.698836 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-logs\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.699439 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.699628 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-config-data\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.699746 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-logs\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.700024 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.702534 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4zxq\" (UniqueName: \"kubernetes.io/projected/b5481178-e051-445f-bb17-64c74007dc15-kube-api-access-f4zxq\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.702719 4878 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.702764 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.702776 4878 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.702786 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5481178-e051-445f-bb17-64c74007dc15-config-data\") on node \"crc\" DevicePath \"\""
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.706645 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-config-data\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.709648 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.720473 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.721819 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw5xk\" (UniqueName: \"kubernetes.io/projected/b3a8cd5b-1c6a-4278-b66d-a0b0802e1546-kube-api-access-kw5xk\") pod \"nova-metadata-0\" (UID: \"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546\") " pod="openstack/nova-metadata-0"
Dec 04 15:59:31 crc kubenswrapper[4878]: I1204 15:59:31.782501 4878 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.289675 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.339109 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546","Type":"ContainerStarted","Data":"3352e2571e407f1d87fc942b417ac5f0d3d7b3bbce361db4bf8595e32e797be9"} Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.343405 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b5481178-e051-445f-bb17-64c74007dc15","Type":"ContainerDied","Data":"8e6fd6b5f2d8113e4127f33d9fe3435556d1ca1778434f272ecb40ecb3c0a258"} Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.343458 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.343467 4878 scope.go:117] "RemoveContainer" containerID="a5ad74f1fa693d7589fb7d66686c4830ca8abc49d7f0d51e3fe87c8820c6b7ee" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.371950 4878 scope.go:117] "RemoveContainer" containerID="109295c711f646496e794115f3c110bfeae09482997ac849417d8798f3f0620b" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.412006 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.431848 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.455023 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:32 crc kubenswrapper[4878]: E1204 15:59:32.455623 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-api" Dec 04 15:59:32 crc 
kubenswrapper[4878]: I1204 15:59:32.455650 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-api" Dec 04 15:59:32 crc kubenswrapper[4878]: E1204 15:59:32.455674 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-log" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.455680 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-log" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.455932 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-api" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.455962 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5481178-e051-445f-bb17-64c74007dc15" containerName="nova-api-log" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.457272 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.460931 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.461065 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.462590 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.469601 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.529826 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c91204a7-98e8-4285-93f6-d6950295491c-logs\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.529967 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-public-tls-certs\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.530184 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-config-data\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.530470 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.530518 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw992\" (UniqueName: \"kubernetes.io/projected/c91204a7-98e8-4285-93f6-d6950295491c-kube-api-access-hw992\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.530714 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.633389 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.633472 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c91204a7-98e8-4285-93f6-d6950295491c-logs\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.633509 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-public-tls-certs\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 
15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.633603 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-config-data\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.633717 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.633754 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw992\" (UniqueName: \"kubernetes.io/projected/c91204a7-98e8-4285-93f6-d6950295491c-kube-api-access-hw992\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.633993 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c91204a7-98e8-4285-93f6-d6950295491c-logs\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.637847 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.638300 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.639840 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-public-tls-certs\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.640394 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c91204a7-98e8-4285-93f6-d6950295491c-config-data\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.652747 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw992\" (UniqueName: \"kubernetes.io/projected/c91204a7-98e8-4285-93f6-d6950295491c-kube-api-access-hw992\") pod \"nova-api-0\" (UID: \"c91204a7-98e8-4285-93f6-d6950295491c\") " pod="openstack/nova-api-0" Dec 04 15:59:32 crc kubenswrapper[4878]: I1204 15:59:32.787681 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 04 15:59:33 crc kubenswrapper[4878]: I1204 15:59:33.193472 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5481178-e051-445f-bb17-64c74007dc15" path="/var/lib/kubelet/pods/b5481178-e051-445f-bb17-64c74007dc15/volumes" Dec 04 15:59:33 crc kubenswrapper[4878]: I1204 15:59:33.194239 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b" path="/var/lib/kubelet/pods/c09e2e56-98f5-4ed7-8214-5d86c7e7ac3b/volumes" Dec 04 15:59:33 crc kubenswrapper[4878]: W1204 15:59:33.285406 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc91204a7_98e8_4285_93f6_d6950295491c.slice/crio-62d70951f2ccf15184776d78bfa8013aa5e39ac4e156ad78c65f1542c0b0cc2e WatchSource:0}: Error finding container 62d70951f2ccf15184776d78bfa8013aa5e39ac4e156ad78c65f1542c0b0cc2e: Status 404 returned error can't find the container with id 62d70951f2ccf15184776d78bfa8013aa5e39ac4e156ad78c65f1542c0b0cc2e Dec 04 15:59:33 crc kubenswrapper[4878]: I1204 15:59:33.286822 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 04 15:59:33 crc kubenswrapper[4878]: I1204 15:59:33.359390 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546","Type":"ContainerStarted","Data":"20d75f553ef61f7d5d448fef59917bf6ac70ee48a138f8b5d21a1372ef924e04"} Dec 04 15:59:33 crc kubenswrapper[4878]: I1204 15:59:33.359442 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b3a8cd5b-1c6a-4278-b66d-a0b0802e1546","Type":"ContainerStarted","Data":"4d9bc180a12d0dbfea819152638d88bd1d8e1251c6169224fc9ecc0ff7ef3af3"} Dec 04 15:59:33 crc kubenswrapper[4878]: I1204 15:59:33.361133 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c91204a7-98e8-4285-93f6-d6950295491c","Type":"ContainerStarted","Data":"62d70951f2ccf15184776d78bfa8013aa5e39ac4e156ad78c65f1542c0b0cc2e"} Dec 04 15:59:33 crc kubenswrapper[4878]: I1204 15:59:33.386507 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.386479628 podStartE2EDuration="2.386479628s" podCreationTimestamp="2025-12-04 15:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:59:33.380862246 +0000 UTC m=+1417.343399202" watchObservedRunningTime="2025-12-04 15:59:33.386479628 +0000 UTC m=+1417.349016584" Dec 04 15:59:34 crc kubenswrapper[4878]: I1204 15:59:34.373695 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c91204a7-98e8-4285-93f6-d6950295491c","Type":"ContainerStarted","Data":"a8d8b04ee56d46e51256ec1bb26a4f9140d7c5f6c9794cad788fdbba985ee936"} Dec 04 15:59:34 crc kubenswrapper[4878]: I1204 15:59:34.374169 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c91204a7-98e8-4285-93f6-d6950295491c","Type":"ContainerStarted","Data":"9e3ab11eaf6d7102e9f78949d1c38ac5c093f470f5ae7cb10a5a785325d3a783"} Dec 04 15:59:34 crc kubenswrapper[4878]: I1204 15:59:34.397568 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.39753513 podStartE2EDuration="2.39753513s" podCreationTimestamp="2025-12-04 15:59:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 15:59:34.390521433 +0000 UTC m=+1418.353058429" watchObservedRunningTime="2025-12-04 15:59:34.39753513 +0000 UTC m=+1418.360072086" Dec 04 15:59:34 crc kubenswrapper[4878]: I1204 15:59:34.667273 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Dec 04 15:59:36 crc kubenswrapper[4878]: I1204 15:59:36.783145 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 15:59:36 crc kubenswrapper[4878]: I1204 15:59:36.783223 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 04 15:59:39 crc kubenswrapper[4878]: I1204 15:59:39.667023 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 04 15:59:39 crc kubenswrapper[4878]: I1204 15:59:39.694249 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 04 15:59:40 crc kubenswrapper[4878]: I1204 15:59:40.485300 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 04 15:59:41 crc kubenswrapper[4878]: I1204 15:59:41.783978 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 04 15:59:41 crc kubenswrapper[4878]: I1204 15:59:41.784354 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 04 15:59:42 crc kubenswrapper[4878]: I1204 15:59:42.788264 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 15:59:42 crc kubenswrapper[4878]: I1204 15:59:42.789934 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 04 15:59:42 crc kubenswrapper[4878]: I1204 15:59:42.797231 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b3a8cd5b-1c6a-4278-b66d-a0b0802e1546" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 15:59:42 crc kubenswrapper[4878]: I1204 15:59:42.797348 4878 
prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b3a8cd5b-1c6a-4278-b66d-a0b0802e1546" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 15:59:43 crc kubenswrapper[4878]: I1204 15:59:43.800083 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c91204a7-98e8-4285-93f6-d6950295491c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 15:59:43 crc kubenswrapper[4878]: I1204 15:59:43.800123 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c91204a7-98e8-4285-93f6-d6950295491c" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.204:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.435663 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-blfdc"] Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.438250 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.447801 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blfdc"] Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.581045 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.597158 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4c8769-b454-4abd-b16c-42443ff77475-catalog-content\") pod \"redhat-operators-blfdc\" (UID: \"8e4c8769-b454-4abd-b16c-42443ff77475\") " pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.597220 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vncdl\" (UniqueName: \"kubernetes.io/projected/8e4c8769-b454-4abd-b16c-42443ff77475-kube-api-access-vncdl\") pod \"redhat-operators-blfdc\" (UID: \"8e4c8769-b454-4abd-b16c-42443ff77475\") " pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.597368 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4c8769-b454-4abd-b16c-42443ff77475-utilities\") pod \"redhat-operators-blfdc\" (UID: \"8e4c8769-b454-4abd-b16c-42443ff77475\") " pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.770372 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4c8769-b454-4abd-b16c-42443ff77475-catalog-content\") pod \"redhat-operators-blfdc\" (UID: \"8e4c8769-b454-4abd-b16c-42443ff77475\") " 
pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.770493 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vncdl\" (UniqueName: \"kubernetes.io/projected/8e4c8769-b454-4abd-b16c-42443ff77475-kube-api-access-vncdl\") pod \"redhat-operators-blfdc\" (UID: \"8e4c8769-b454-4abd-b16c-42443ff77475\") " pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.783269 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4c8769-b454-4abd-b16c-42443ff77475-utilities\") pod \"redhat-operators-blfdc\" (UID: \"8e4c8769-b454-4abd-b16c-42443ff77475\") " pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.784182 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4c8769-b454-4abd-b16c-42443ff77475-utilities\") pod \"redhat-operators-blfdc\" (UID: \"8e4c8769-b454-4abd-b16c-42443ff77475\") " pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.790712 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4c8769-b454-4abd-b16c-42443ff77475-catalog-content\") pod \"redhat-operators-blfdc\" (UID: \"8e4c8769-b454-4abd-b16c-42443ff77475\") " pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:49 crc kubenswrapper[4878]: I1204 15:59:49.826083 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vncdl\" (UniqueName: \"kubernetes.io/projected/8e4c8769-b454-4abd-b16c-42443ff77475-kube-api-access-vncdl\") pod \"redhat-operators-blfdc\" (UID: \"8e4c8769-b454-4abd-b16c-42443ff77475\") " pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:50 
crc kubenswrapper[4878]: I1204 15:59:50.070272 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 15:59:50 crc kubenswrapper[4878]: I1204 15:59:50.613601 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blfdc"] Dec 04 15:59:51 crc kubenswrapper[4878]: I1204 15:59:51.581276 4878 generic.go:334] "Generic (PLEG): container finished" podID="8e4c8769-b454-4abd-b16c-42443ff77475" containerID="f3555ef10dfe76f576792b281437c98c973364df638b3be7364ad1d6cd2202d0" exitCode=0 Dec 04 15:59:51 crc kubenswrapper[4878]: I1204 15:59:51.581484 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfdc" event={"ID":"8e4c8769-b454-4abd-b16c-42443ff77475","Type":"ContainerDied","Data":"f3555ef10dfe76f576792b281437c98c973364df638b3be7364ad1d6cd2202d0"} Dec 04 15:59:51 crc kubenswrapper[4878]: I1204 15:59:51.581635 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfdc" event={"ID":"8e4c8769-b454-4abd-b16c-42443ff77475","Type":"ContainerStarted","Data":"afb7434127cb6f139312ddcd48dc9a86fbf90d4cefe99505b54607de6996ca1a"} Dec 04 15:59:51 crc kubenswrapper[4878]: I1204 15:59:51.790902 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 04 15:59:51 crc kubenswrapper[4878]: I1204 15:59:51.794037 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 04 15:59:51 crc kubenswrapper[4878]: I1204 15:59:51.801655 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 04 15:59:52 crc kubenswrapper[4878]: I1204 15:59:52.602427 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 04 15:59:52 crc kubenswrapper[4878]: I1204 15:59:52.799599 4878 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 04 15:59:52 crc kubenswrapper[4878]: I1204 15:59:52.800302 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 04 15:59:52 crc kubenswrapper[4878]: I1204 15:59:52.802351 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 04 15:59:52 crc kubenswrapper[4878]: I1204 15:59:52.808630 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 04 15:59:53 crc kubenswrapper[4878]: I1204 15:59:53.612163 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 04 15:59:53 crc kubenswrapper[4878]: I1204 15:59:53.622147 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.146287 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l"] Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.148176 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.150345 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.150458 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.157578 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l"] Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.250084 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533a5341-d0d2-47cc-bb45-4fbfc2562452-config-volume\") pod \"collect-profiles-29414400-z2j4l\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.250146 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533a5341-d0d2-47cc-bb45-4fbfc2562452-secret-volume\") pod \"collect-profiles-29414400-z2j4l\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.250727 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7xln\" (UniqueName: \"kubernetes.io/projected/533a5341-d0d2-47cc-bb45-4fbfc2562452-kube-api-access-w7xln\") pod \"collect-profiles-29414400-z2j4l\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.353760 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533a5341-d0d2-47cc-bb45-4fbfc2562452-config-volume\") pod \"collect-profiles-29414400-z2j4l\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.353822 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533a5341-d0d2-47cc-bb45-4fbfc2562452-secret-volume\") pod \"collect-profiles-29414400-z2j4l\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.354023 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7xln\" (UniqueName: \"kubernetes.io/projected/533a5341-d0d2-47cc-bb45-4fbfc2562452-kube-api-access-w7xln\") pod \"collect-profiles-29414400-z2j4l\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.354789 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533a5341-d0d2-47cc-bb45-4fbfc2562452-config-volume\") pod \"collect-profiles-29414400-z2j4l\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.373383 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7xln\" (UniqueName: 
\"kubernetes.io/projected/533a5341-d0d2-47cc-bb45-4fbfc2562452-kube-api-access-w7xln\") pod \"collect-profiles-29414400-z2j4l\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.379303 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533a5341-d0d2-47cc-bb45-4fbfc2562452-secret-volume\") pod \"collect-profiles-29414400-z2j4l\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.481970 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.840593 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:00:00 crc kubenswrapper[4878]: I1204 16:00:00.840969 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:00:01 crc kubenswrapper[4878]: I1204 16:00:01.018945 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l"] Dec 04 16:00:01 crc kubenswrapper[4878]: W1204 16:00:01.019767 4878 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod533a5341_d0d2_47cc_bb45_4fbfc2562452.slice/crio-00373d51ea3a5cd741d9a339073c7abd6731e049ae5b17df30870e02e454ca74 WatchSource:0}: Error finding container 00373d51ea3a5cd741d9a339073c7abd6731e049ae5b17df30870e02e454ca74: Status 404 returned error can't find the container with id 00373d51ea3a5cd741d9a339073c7abd6731e049ae5b17df30870e02e454ca74 Dec 04 16:00:01 crc kubenswrapper[4878]: I1204 16:00:01.767712 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfdc" event={"ID":"8e4c8769-b454-4abd-b16c-42443ff77475","Type":"ContainerStarted","Data":"4e3eb4263cd5f988a7aeb0e6628b674c592b90094e4adef4ce1ee70c0833fd65"} Dec 04 16:00:01 crc kubenswrapper[4878]: I1204 16:00:01.769444 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" event={"ID":"533a5341-d0d2-47cc-bb45-4fbfc2562452","Type":"ContainerStarted","Data":"00373d51ea3a5cd741d9a339073c7abd6731e049ae5b17df30870e02e454ca74"} Dec 04 16:00:03 crc kubenswrapper[4878]: I1204 16:00:03.404373 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 16:00:03 crc kubenswrapper[4878]: I1204 16:00:03.794189 4878 generic.go:334] "Generic (PLEG): container finished" podID="8e4c8769-b454-4abd-b16c-42443ff77475" containerID="4e3eb4263cd5f988a7aeb0e6628b674c592b90094e4adef4ce1ee70c0833fd65" exitCode=0 Dec 04 16:00:03 crc kubenswrapper[4878]: I1204 16:00:03.794253 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfdc" event={"ID":"8e4c8769-b454-4abd-b16c-42443ff77475","Type":"ContainerDied","Data":"4e3eb4263cd5f988a7aeb0e6628b674c592b90094e4adef4ce1ee70c0833fd65"} Dec 04 16:00:03 crc kubenswrapper[4878]: I1204 16:00:03.796517 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" 
event={"ID":"533a5341-d0d2-47cc-bb45-4fbfc2562452","Type":"ContainerStarted","Data":"68fcf586ad4511efb67ca92ad51a28f44d4568d9254d9199030e42f32104ec99"} Dec 04 16:00:03 crc kubenswrapper[4878]: I1204 16:00:03.835428 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" podStartSLOduration=3.835401985 podStartE2EDuration="3.835401985s" podCreationTimestamp="2025-12-04 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 16:00:03.833899737 +0000 UTC m=+1447.796436693" watchObservedRunningTime="2025-12-04 16:00:03.835401985 +0000 UTC m=+1447.797938941" Dec 04 16:00:04 crc kubenswrapper[4878]: I1204 16:00:04.776661 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 16:00:04 crc kubenswrapper[4878]: I1204 16:00:04.808445 4878 generic.go:334] "Generic (PLEG): container finished" podID="533a5341-d0d2-47cc-bb45-4fbfc2562452" containerID="68fcf586ad4511efb67ca92ad51a28f44d4568d9254d9199030e42f32104ec99" exitCode=0 Dec 04 16:00:04 crc kubenswrapper[4878]: I1204 16:00:04.808495 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" event={"ID":"533a5341-d0d2-47cc-bb45-4fbfc2562452","Type":"ContainerDied","Data":"68fcf586ad4511efb67ca92ad51a28f44d4568d9254d9199030e42f32104ec99"} Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.236530 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.387860 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533a5341-d0d2-47cc-bb45-4fbfc2562452-secret-volume\") pod \"533a5341-d0d2-47cc-bb45-4fbfc2562452\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.388236 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533a5341-d0d2-47cc-bb45-4fbfc2562452-config-volume\") pod \"533a5341-d0d2-47cc-bb45-4fbfc2562452\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.388321 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7xln\" (UniqueName: \"kubernetes.io/projected/533a5341-d0d2-47cc-bb45-4fbfc2562452-kube-api-access-w7xln\") pod \"533a5341-d0d2-47cc-bb45-4fbfc2562452\" (UID: \"533a5341-d0d2-47cc-bb45-4fbfc2562452\") " Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.388908 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/533a5341-d0d2-47cc-bb45-4fbfc2562452-config-volume" (OuterVolumeSpecName: "config-volume") pod "533a5341-d0d2-47cc-bb45-4fbfc2562452" (UID: "533a5341-d0d2-47cc-bb45-4fbfc2562452"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.394094 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/533a5341-d0d2-47cc-bb45-4fbfc2562452-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "533a5341-d0d2-47cc-bb45-4fbfc2562452" (UID: "533a5341-d0d2-47cc-bb45-4fbfc2562452"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.394622 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533a5341-d0d2-47cc-bb45-4fbfc2562452-kube-api-access-w7xln" (OuterVolumeSpecName: "kube-api-access-w7xln") pod "533a5341-d0d2-47cc-bb45-4fbfc2562452" (UID: "533a5341-d0d2-47cc-bb45-4fbfc2562452"). InnerVolumeSpecName "kube-api-access-w7xln". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.490864 4878 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/533a5341-d0d2-47cc-bb45-4fbfc2562452-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.490933 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7xln\" (UniqueName: \"kubernetes.io/projected/533a5341-d0d2-47cc-bb45-4fbfc2562452-kube-api-access-w7xln\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.490969 4878 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/533a5341-d0d2-47cc-bb45-4fbfc2562452-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.852446 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" event={"ID":"533a5341-d0d2-47cc-bb45-4fbfc2562452","Type":"ContainerDied","Data":"00373d51ea3a5cd741d9a339073c7abd6731e049ae5b17df30870e02e454ca74"} Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.852744 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00373d51ea3a5cd741d9a339073c7abd6731e049ae5b17df30870e02e454ca74" Dec 04 16:00:06 crc kubenswrapper[4878]: I1204 16:00:06.852513 4878 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l" Dec 04 16:00:07 crc kubenswrapper[4878]: I1204 16:00:07.866559 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfdc" event={"ID":"8e4c8769-b454-4abd-b16c-42443ff77475","Type":"ContainerStarted","Data":"26d44fa341acd36544960dfd7b91c072533afb2d9902fac2cc3f6e29469bbb6b"} Dec 04 16:00:07 crc kubenswrapper[4878]: I1204 16:00:07.889386 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-blfdc" podStartSLOduration=3.860526712 podStartE2EDuration="18.889364864s" podCreationTimestamp="2025-12-04 15:59:49 +0000 UTC" firstStartedPulling="2025-12-04 15:59:51.583965135 +0000 UTC m=+1435.546502091" lastFinishedPulling="2025-12-04 16:00:06.612803287 +0000 UTC m=+1450.575340243" observedRunningTime="2025-12-04 16:00:07.883803513 +0000 UTC m=+1451.846340489" watchObservedRunningTime="2025-12-04 16:00:07.889364864 +0000 UTC m=+1451.851901820" Dec 04 16:00:08 crc kubenswrapper[4878]: I1204 16:00:08.563386 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" containerName="rabbitmq" containerID="cri-o://697b76773e156792e6a95d3f9f134a20e04ccd0a91e6dbc0064ece01cdda3891" gracePeriod=604795 Dec 04 16:00:09 crc kubenswrapper[4878]: I1204 16:00:09.427698 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" containerName="rabbitmq" containerID="cri-o://af8f0d0429f9116de445491ff3e1f6a451a3a12b27b61b49708dd8def9270850" gracePeriod=604796 Dec 04 16:00:10 crc kubenswrapper[4878]: I1204 16:00:10.070581 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 16:00:10 crc kubenswrapper[4878]: 
I1204 16:00:10.070650 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 16:00:11 crc kubenswrapper[4878]: I1204 16:00:11.125007 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-blfdc" podUID="8e4c8769-b454-4abd-b16c-42443ff77475" containerName="registry-server" probeResult="failure" output=< Dec 04 16:00:11 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 16:00:11 crc kubenswrapper[4878]: > Dec 04 16:00:11 crc kubenswrapper[4878]: I1204 16:00:11.600467 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Dec 04 16:00:12 crc kubenswrapper[4878]: I1204 16:00:12.237636 4878 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 04 16:00:14 crc kubenswrapper[4878]: I1204 16:00:14.936711 4878 generic.go:334] "Generic (PLEG): container finished" podID="f17e1868-a868-47aa-8e98-e60203d8295f" containerID="697b76773e156792e6a95d3f9f134a20e04ccd0a91e6dbc0064ece01cdda3891" exitCode=0 Dec 04 16:00:14 crc kubenswrapper[4878]: I1204 16:00:14.936920 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f17e1868-a868-47aa-8e98-e60203d8295f","Type":"ContainerDied","Data":"697b76773e156792e6a95d3f9f134a20e04ccd0a91e6dbc0064ece01cdda3891"} Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.162935 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.281651 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-plugins-conf\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.281719 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-server-conf\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.281748 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-erlang-cookie\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.281856 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-confd\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.281941 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-config-data\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.281991 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-k2v6d\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-kube-api-access-k2v6d\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.282026 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f17e1868-a868-47aa-8e98-e60203d8295f-pod-info\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.282084 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-tls\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.282136 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-plugins\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.282166 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.282209 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f17e1868-a868-47aa-8e98-e60203d8295f-erlang-cookie-secret\") pod \"f17e1868-a868-47aa-8e98-e60203d8295f\" (UID: \"f17e1868-a868-47aa-8e98-e60203d8295f\") " Dec 04 16:00:15 crc 
kubenswrapper[4878]: I1204 16:00:15.284518 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.285041 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.293169 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.293383 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.299161 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f17e1868-a868-47aa-8e98-e60203d8295f-pod-info" (OuterVolumeSpecName: "pod-info") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.302543 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-kube-api-access-k2v6d" (OuterVolumeSpecName: "kube-api-access-k2v6d") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "kube-api-access-k2v6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.310370 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f17e1868-a868-47aa-8e98-e60203d8295f-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.314070 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.341479 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-config-data" (OuterVolumeSpecName: "config-data") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.374406 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-server-conf" (OuterVolumeSpecName: "server-conf") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.390583 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2v6d\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-kube-api-access-k2v6d\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.390613 4878 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f17e1868-a868-47aa-8e98-e60203d8295f-pod-info\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.390624 4878 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.390634 4878 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-plugins\") on node \"crc\" DevicePath 
\"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.390940 4878 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.390975 4878 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f17e1868-a868-47aa-8e98-e60203d8295f-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.390988 4878 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.390996 4878 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-server-conf\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.391005 4878 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.391013 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f17e1868-a868-47aa-8e98-e60203d8295f-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.428053 4878 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.456436 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f17e1868-a868-47aa-8e98-e60203d8295f" (UID: "f17e1868-a868-47aa-8e98-e60203d8295f"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.493060 4878 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f17e1868-a868-47aa-8e98-e60203d8295f-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.493094 4878 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.953302 4878 generic.go:334] "Generic (PLEG): container finished" podID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" containerID="af8f0d0429f9116de445491ff3e1f6a451a3a12b27b61b49708dd8def9270850" exitCode=0 Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.953385 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b85c4bb-73ad-4002-85b3-46a1f83cd326","Type":"ContainerDied","Data":"af8f0d0429f9116de445491ff3e1f6a451a3a12b27b61b49708dd8def9270850"} Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.953797 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"2b85c4bb-73ad-4002-85b3-46a1f83cd326","Type":"ContainerDied","Data":"8a4d08a36e64c2d8389be5d832bc44f66ebfe38611218cdd07d13a5c21a09591"} Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.953818 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a4d08a36e64c2d8389be5d832bc44f66ebfe38611218cdd07d13a5c21a09591" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.956278 4878 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f17e1868-a868-47aa-8e98-e60203d8295f","Type":"ContainerDied","Data":"cc9fe43b48f3a5aff6c14be35ab3fd86c87a35d8a4bdabe9db6282860e0c1095"} Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.956342 4878 scope.go:117] "RemoveContainer" containerID="697b76773e156792e6a95d3f9f134a20e04ccd0a91e6dbc0064ece01cdda3891" Dec 04 16:00:15 crc kubenswrapper[4878]: I1204 16:00:15.956489 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.048572 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.062978 4878 scope.go:117] "RemoveContainer" containerID="aa30f83cdc9e3faac13e73e9ab0e7b955edf28e4e8f407845d5f126ae6eafd7f" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.072293 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.085726 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.151796 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-plugins-conf\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.157167 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161011 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161080 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-erlang-cookie\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161198 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-server-conf\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161247 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-confd\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161297 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-tls\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161349 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/2b85c4bb-73ad-4002-85b3-46a1f83cd326-pod-info\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161427 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-plugins\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161479 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-config-data\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161553 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b85c4bb-73ad-4002-85b3-46a1f83cd326-erlang-cookie-secret\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.161613 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rqnr\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-kube-api-access-6rqnr\") pod \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\" (UID: \"2b85c4bb-73ad-4002-85b3-46a1f83cd326\") " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.167120 4878 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.171460 4878 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.172325 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.292300 4878 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.292344 4878 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.314242 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-kube-api-access-6rqnr" (OuterVolumeSpecName: "kube-api-access-6rqnr") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "kube-api-access-6rqnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.314410 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "persistence") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.327789 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/2b85c4bb-73ad-4002-85b3-46a1f83cd326-pod-info" (OuterVolumeSpecName: "pod-info") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.328004 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.328524 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b85c4bb-73ad-4002-85b3-46a1f83cd326-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.357024 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 16:00:16 crc kubenswrapper[4878]: E1204 16:00:16.358158 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" containerName="rabbitmq" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.358238 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" containerName="rabbitmq" Dec 04 16:00:16 crc kubenswrapper[4878]: E1204 16:00:16.358266 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533a5341-d0d2-47cc-bb45-4fbfc2562452" containerName="collect-profiles" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.358274 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="533a5341-d0d2-47cc-bb45-4fbfc2562452" containerName="collect-profiles" Dec 04 16:00:16 crc kubenswrapper[4878]: E1204 16:00:16.358322 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" containerName="rabbitmq" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.358332 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" containerName="rabbitmq" Dec 04 16:00:16 crc kubenswrapper[4878]: E1204 16:00:16.358350 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" containerName="setup-container" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.358360 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" containerName="setup-container" Dec 04 16:00:16 crc kubenswrapper[4878]: E1204 16:00:16.358377 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" containerName="setup-container" Dec 04 16:00:16 crc 
kubenswrapper[4878]: I1204 16:00:16.358387 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" containerName="setup-container" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.358866 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" containerName="rabbitmq" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.358922 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="533a5341-d0d2-47cc-bb45-4fbfc2562452" containerName="collect-profiles" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.358949 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" containerName="rabbitmq" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.387967 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.393562 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.393775 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.393943 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rbjnw" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.394109 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.394211 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.393687 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 
16:00:16.394452 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.396252 4878 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.396358 4878 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/2b85c4bb-73ad-4002-85b3-46a1f83cd326-pod-info\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.396425 4878 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/2b85c4bb-73ad-4002-85b3-46a1f83cd326-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.396487 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rqnr\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-kube-api-access-6rqnr\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.396564 4878 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.433837 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-config-data" (OuterVolumeSpecName: "config-data") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.433863 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.457130 4878 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.497590 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/976d4c5a-fb7f-4f01-8d0d-527a87639c33-server-conf\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.497633 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.497666 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.497690 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/976d4c5a-fb7f-4f01-8d0d-527a87639c33-config-data\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 
16:00:16.497717 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/976d4c5a-fb7f-4f01-8d0d-527a87639c33-pod-info\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.497735 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/976d4c5a-fb7f-4f01-8d0d-527a87639c33-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.497770 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68hp8\" (UniqueName: \"kubernetes.io/projected/976d4c5a-fb7f-4f01-8d0d-527a87639c33-kube-api-access-68hp8\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.497965 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.498145 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/976d4c5a-fb7f-4f01-8d0d-527a87639c33-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.498305 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.498342 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.498544 4878 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.498562 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.516449 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-server-conf" (OuterVolumeSpecName: "server-conf") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.547383 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "2b85c4bb-73ad-4002-85b3-46a1f83cd326" (UID: "2b85c4bb-73ad-4002-85b3-46a1f83cd326"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600212 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/976d4c5a-fb7f-4f01-8d0d-527a87639c33-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600312 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600350 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600405 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/976d4c5a-fb7f-4f01-8d0d-527a87639c33-server-conf\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600429 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600456 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600497 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/976d4c5a-fb7f-4f01-8d0d-527a87639c33-config-data\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600541 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/976d4c5a-fb7f-4f01-8d0d-527a87639c33-pod-info\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600561 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/976d4c5a-fb7f-4f01-8d0d-527a87639c33-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600631 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68hp8\" (UniqueName: \"kubernetes.io/projected/976d4c5a-fb7f-4f01-8d0d-527a87639c33-kube-api-access-68hp8\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600665 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.600754 4878 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/2b85c4bb-73ad-4002-85b3-46a1f83cd326-server-conf\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.601253 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/976d4c5a-fb7f-4f01-8d0d-527a87639c33-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.601289 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.601460 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.601514 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.602460 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/976d4c5a-fb7f-4f01-8d0d-527a87639c33-config-data\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.602551 4878 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/2b85c4bb-73ad-4002-85b3-46a1f83cd326-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.602569 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/976d4c5a-fb7f-4f01-8d0d-527a87639c33-server-conf\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.604913 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.606681 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/976d4c5a-fb7f-4f01-8d0d-527a87639c33-pod-info\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.607261 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/976d4c5a-fb7f-4f01-8d0d-527a87639c33-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 
crc kubenswrapper[4878]: I1204 16:00:16.611993 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/976d4c5a-fb7f-4f01-8d0d-527a87639c33-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.621452 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68hp8\" (UniqueName: \"kubernetes.io/projected/976d4c5a-fb7f-4f01-8d0d-527a87639c33-kube-api-access-68hp8\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.642833 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-server-0\" (UID: \"976d4c5a-fb7f-4f01-8d0d-527a87639c33\") " pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.752023 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 04 16:00:16 crc kubenswrapper[4878]: I1204 16:00:16.978200 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.022441 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.032710 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.052943 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.062103 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.068830 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.069217 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.069343 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.069374 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.069544 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.069627 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.069687 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-n7tw8" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.085760 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117214 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117269 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9686eee-f63a-40e8-a8a6-fe5901d0888c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117322 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9686eee-f63a-40e8-a8a6-fe5901d0888c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117422 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117473 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117493 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9686eee-f63a-40e8-a8a6-fe5901d0888c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117578 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117660 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f26ct\" (UniqueName: \"kubernetes.io/projected/c9686eee-f63a-40e8-a8a6-fe5901d0888c-kube-api-access-f26ct\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117681 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9686eee-f63a-40e8-a8a6-fe5901d0888c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117715 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9686eee-f63a-40e8-a8a6-fe5901d0888c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.117738 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.191977 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b85c4bb-73ad-4002-85b3-46a1f83cd326" 
path="/var/lib/kubelet/pods/2b85c4bb-73ad-4002-85b3-46a1f83cd326/volumes" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.193079 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17e1868-a868-47aa-8e98-e60203d8295f" path="/var/lib/kubelet/pods/f17e1868-a868-47aa-8e98-e60203d8295f/volumes" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.219620 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f26ct\" (UniqueName: \"kubernetes.io/projected/c9686eee-f63a-40e8-a8a6-fe5901d0888c-kube-api-access-f26ct\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.219666 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9686eee-f63a-40e8-a8a6-fe5901d0888c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.219713 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9686eee-f63a-40e8-a8a6-fe5901d0888c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.219736 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.220047 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.220119 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9686eee-f63a-40e8-a8a6-fe5901d0888c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.220205 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9686eee-f63a-40e8-a8a6-fe5901d0888c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.220483 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.220509 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.220605 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.220642 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9686eee-f63a-40e8-a8a6-fe5901d0888c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.220795 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.220947 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9686eee-f63a-40e8-a8a6-fe5901d0888c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.221180 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.221567 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.221804 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9686eee-f63a-40e8-a8a6-fe5901d0888c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.222281 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9686eee-f63a-40e8-a8a6-fe5901d0888c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.227034 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.227056 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9686eee-f63a-40e8-a8a6-fe5901d0888c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.227540 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9686eee-f63a-40e8-a8a6-fe5901d0888c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.232179 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9686eee-f63a-40e8-a8a6-fe5901d0888c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.252279 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f26ct\" (UniqueName: \"kubernetes.io/projected/c9686eee-f63a-40e8-a8a6-fe5901d0888c-kube-api-access-f26ct\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.267200 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.275188 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9686eee-f63a-40e8-a8a6-fe5901d0888c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.398312 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:17 crc kubenswrapper[4878]: W1204 16:00:17.870641 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9686eee_f63a_40e8_a8a6_fe5901d0888c.slice/crio-f5dcfa7af88b2e10d2e33b27dfb59dac8597721983bd7429c656e8fa66edbd3f WatchSource:0}: Error finding container f5dcfa7af88b2e10d2e33b27dfb59dac8597721983bd7429c656e8fa66edbd3f: Status 404 returned error can't find the container with id f5dcfa7af88b2e10d2e33b27dfb59dac8597721983bd7429c656e8fa66edbd3f Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.877552 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.993674 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"976d4c5a-fb7f-4f01-8d0d-527a87639c33","Type":"ContainerStarted","Data":"69784b64e4c595781647eb2971c02f1222b6450206402b8791f57547949b84be"} Dec 04 16:00:17 crc kubenswrapper[4878]: I1204 16:00:17.996149 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9686eee-f63a-40e8-a8a6-fe5901d0888c","Type":"ContainerStarted","Data":"f5dcfa7af88b2e10d2e33b27dfb59dac8597721983bd7429c656e8fa66edbd3f"} Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.217153 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gm8p4"] Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.220589 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.223436 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.228967 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gm8p4"] Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.371893 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppw4p\" (UniqueName: \"kubernetes.io/projected/421d4cec-296d-4809-ab7b-4261bbf6ebda-kube-api-access-ppw4p\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.372043 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.372143 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-config\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.372262 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.372303 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.372353 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.372386 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.475684 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-config\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.474384 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-config\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.476155 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.476811 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.477572 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.476932 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.477743 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 
04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.478712 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.478792 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.479448 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.479774 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppw4p\" (UniqueName: \"kubernetes.io/projected/421d4cec-296d-4809-ab7b-4261bbf6ebda-kube-api-access-ppw4p\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.480344 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.481057 4878 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.499912 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppw4p\" (UniqueName: \"kubernetes.io/projected/421d4cec-296d-4809-ab7b-4261bbf6ebda-kube-api-access-ppw4p\") pod \"dnsmasq-dns-79bd4cc8c9-gm8p4\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:19 crc kubenswrapper[4878]: I1204 16:00:19.551588 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:20 crc kubenswrapper[4878]: I1204 16:00:20.020191 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"976d4c5a-fb7f-4f01-8d0d-527a87639c33","Type":"ContainerStarted","Data":"9eb9c5716af6847faa4c652914dcb59ee5d5145c0ad4bf97c514077211789670"} Dec 04 16:00:20 crc kubenswrapper[4878]: I1204 16:00:20.022044 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9686eee-f63a-40e8-a8a6-fe5901d0888c","Type":"ContainerStarted","Data":"5ee696b7b68e425b5066ac6209d3d4249e437c6b1e930ee94ed428afb5727ebd"} Dec 04 16:00:20 crc kubenswrapper[4878]: I1204 16:00:20.129667 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 16:00:20 crc kubenswrapper[4878]: I1204 16:00:20.187272 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-blfdc" Dec 04 16:00:20 crc kubenswrapper[4878]: I1204 16:00:20.202274 4878 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gm8p4"] Dec 04 16:00:20 crc kubenswrapper[4878]: W1204 16:00:20.212716 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod421d4cec_296d_4809_ab7b_4261bbf6ebda.slice/crio-06038ce31f32f3f420dd2bcbc6c2d20811cd555c286b61e5458860e428015de1 WatchSource:0}: Error finding container 06038ce31f32f3f420dd2bcbc6c2d20811cd555c286b61e5458860e428015de1: Status 404 returned error can't find the container with id 06038ce31f32f3f420dd2bcbc6c2d20811cd555c286b61e5458860e428015de1 Dec 04 16:00:20 crc kubenswrapper[4878]: I1204 16:00:20.490954 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blfdc"] Dec 04 16:00:20 crc kubenswrapper[4878]: I1204 16:00:20.638271 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7pkwt"] Dec 04 16:00:20 crc kubenswrapper[4878]: I1204 16:00:20.638594 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7pkwt" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerName="registry-server" containerID="cri-o://d4e1e66cc94751c9e19616128b1a5eaff1dbf026d807c829448db8961e9faee3" gracePeriod=2 Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.058747 4878 generic.go:334] "Generic (PLEG): container finished" podID="421d4cec-296d-4809-ab7b-4261bbf6ebda" containerID="9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469" exitCode=0 Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.059701 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" event={"ID":"421d4cec-296d-4809-ab7b-4261bbf6ebda","Type":"ContainerDied","Data":"9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469"} Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.060008 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" event={"ID":"421d4cec-296d-4809-ab7b-4261bbf6ebda","Type":"ContainerStarted","Data":"06038ce31f32f3f420dd2bcbc6c2d20811cd555c286b61e5458860e428015de1"} Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.079301 4878 generic.go:334] "Generic (PLEG): container finished" podID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerID="d4e1e66cc94751c9e19616128b1a5eaff1dbf026d807c829448db8961e9faee3" exitCode=0 Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.079372 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pkwt" event={"ID":"605a791d-cbd5-4a04-b896-c580ba3438fc","Type":"ContainerDied","Data":"d4e1e66cc94751c9e19616128b1a5eaff1dbf026d807c829448db8961e9faee3"} Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.207258 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.320550 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-utilities\") pod \"605a791d-cbd5-4a04-b896-c580ba3438fc\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.320761 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-catalog-content\") pod \"605a791d-cbd5-4a04-b896-c580ba3438fc\" (UID: \"605a791d-cbd5-4a04-b896-c580ba3438fc\") " Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.320788 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58rwg\" (UniqueName: \"kubernetes.io/projected/605a791d-cbd5-4a04-b896-c580ba3438fc-kube-api-access-58rwg\") pod \"605a791d-cbd5-4a04-b896-c580ba3438fc\" (UID: 
\"605a791d-cbd5-4a04-b896-c580ba3438fc\") " Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.321125 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-utilities" (OuterVolumeSpecName: "utilities") pod "605a791d-cbd5-4a04-b896-c580ba3438fc" (UID: "605a791d-cbd5-4a04-b896-c580ba3438fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.321404 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.325852 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/605a791d-cbd5-4a04-b896-c580ba3438fc-kube-api-access-58rwg" (OuterVolumeSpecName: "kube-api-access-58rwg") pod "605a791d-cbd5-4a04-b896-c580ba3438fc" (UID: "605a791d-cbd5-4a04-b896-c580ba3438fc"). InnerVolumeSpecName "kube-api-access-58rwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.424061 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58rwg\" (UniqueName: \"kubernetes.io/projected/605a791d-cbd5-4a04-b896-c580ba3438fc-kube-api-access-58rwg\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.425217 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "605a791d-cbd5-4a04-b896-c580ba3438fc" (UID: "605a791d-cbd5-4a04-b896-c580ba3438fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:00:21 crc kubenswrapper[4878]: I1204 16:00:21.525861 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/605a791d-cbd5-4a04-b896-c580ba3438fc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.090763 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" event={"ID":"421d4cec-296d-4809-ab7b-4261bbf6ebda","Type":"ContainerStarted","Data":"8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878"} Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.092596 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.094712 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7pkwt" Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.095729 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7pkwt" event={"ID":"605a791d-cbd5-4a04-b896-c580ba3438fc","Type":"ContainerDied","Data":"495330b7f1b83d60e9f9c7fcab4929c3383f2426c8b7822764de8c8744c2dfdd"} Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.095775 4878 scope.go:117] "RemoveContainer" containerID="d4e1e66cc94751c9e19616128b1a5eaff1dbf026d807c829448db8961e9faee3" Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.119088 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" podStartSLOduration=3.119059191 podStartE2EDuration="3.119059191s" podCreationTimestamp="2025-12-04 16:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 16:00:22.114976598 +0000 UTC m=+1466.077513554" 
watchObservedRunningTime="2025-12-04 16:00:22.119059191 +0000 UTC m=+1466.081596147" Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.140958 4878 scope.go:117] "RemoveContainer" containerID="36d6e988002c0e46e1e59b95a7f0104314f8bdf018c6c65d96c7ab02a2c81833" Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.142627 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7pkwt"] Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.157073 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7pkwt"] Dec 04 16:00:22 crc kubenswrapper[4878]: I1204 16:00:22.184238 4878 scope.go:117] "RemoveContainer" containerID="b2efefdcc12e9ac5475ff031485463394b657aa54862f7d489169fdfbb1b9bcd" Dec 04 16:00:23 crc kubenswrapper[4878]: I1204 16:00:23.192818 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" path="/var/lib/kubelet/pods/605a791d-cbd5-4a04-b896-c580ba3438fc/volumes" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.553947 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.633363 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6ngd6"] Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.633689 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" podUID="7119b41d-07a7-4b01-8a58-5b67479d095f" containerName="dnsmasq-dns" containerID="cri-o://93b0c37db41560dc9e1f3aabe07f56c095621d8dcc83f3828920f0a16eb17a5b" gracePeriod=10 Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.848407 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-ttk4q"] Dec 04 16:00:29 crc kubenswrapper[4878]: E1204 16:00:29.848890 4878 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerName="extract-utilities" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.848908 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerName="extract-utilities" Dec 04 16:00:29 crc kubenswrapper[4878]: E1204 16:00:29.848952 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerName="registry-server" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.848960 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerName="registry-server" Dec 04 16:00:29 crc kubenswrapper[4878]: E1204 16:00:29.848984 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerName="extract-content" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.848991 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerName="extract-content" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.849194 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="605a791d-cbd5-4a04-b896-c580ba3438fc" containerName="registry-server" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.850394 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.901805 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtsrr\" (UniqueName: \"kubernetes.io/projected/cb2cb23f-6f8d-43f2-a251-35f680844694-kube-api-access-mtsrr\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.901855 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.901963 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-config\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.902115 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-dns-svc\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.902174 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-ttk4q\" 
(UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.902297 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.902405 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:29 crc kubenswrapper[4878]: I1204 16:00:29.912659 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-ttk4q"] Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.005512 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-dns-svc\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.005597 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.005662 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.005725 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.005847 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.005892 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtsrr\" (UniqueName: \"kubernetes.io/projected/cb2cb23f-6f8d-43f2-a251-35f680844694-kube-api-access-mtsrr\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.005935 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-config\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.007101 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.007140 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-config\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.007745 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-dns-svc\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.007926 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.008417 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.008642 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cb2cb23f-6f8d-43f2-a251-35f680844694-ovsdbserver-nb\") pod 
\"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.030444 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtsrr\" (UniqueName: \"kubernetes.io/projected/cb2cb23f-6f8d-43f2-a251-35f680844694-kube-api-access-mtsrr\") pod \"dnsmasq-dns-55478c4467-ttk4q\" (UID: \"cb2cb23f-6f8d-43f2-a251-35f680844694\") " pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.182818 4878 generic.go:334] "Generic (PLEG): container finished" podID="7119b41d-07a7-4b01-8a58-5b67479d095f" containerID="93b0c37db41560dc9e1f3aabe07f56c095621d8dcc83f3828920f0a16eb17a5b" exitCode=0 Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.182885 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" event={"ID":"7119b41d-07a7-4b01-8a58-5b67479d095f","Type":"ContainerDied","Data":"93b0c37db41560dc9e1f3aabe07f56c095621d8dcc83f3828920f0a16eb17a5b"} Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.182920 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" event={"ID":"7119b41d-07a7-4b01-8a58-5b67479d095f","Type":"ContainerDied","Data":"bade23fa08c67c7082844f7d9edd643bc904d78677ca450511c16f6de4f77cda"} Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.182932 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bade23fa08c67c7082844f7d9edd643bc904d78677ca450511c16f6de4f77cda" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.197892 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.208618 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.312631 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-svc\") pod \"7119b41d-07a7-4b01-8a58-5b67479d095f\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.312730 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-config\") pod \"7119b41d-07a7-4b01-8a58-5b67479d095f\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.312776 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-nb\") pod \"7119b41d-07a7-4b01-8a58-5b67479d095f\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.312803 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-sb\") pod \"7119b41d-07a7-4b01-8a58-5b67479d095f\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.312950 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-swift-storage-0\") pod \"7119b41d-07a7-4b01-8a58-5b67479d095f\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.312990 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g594x\" 
(UniqueName: \"kubernetes.io/projected/7119b41d-07a7-4b01-8a58-5b67479d095f-kube-api-access-g594x\") pod \"7119b41d-07a7-4b01-8a58-5b67479d095f\" (UID: \"7119b41d-07a7-4b01-8a58-5b67479d095f\") " Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.318974 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7119b41d-07a7-4b01-8a58-5b67479d095f-kube-api-access-g594x" (OuterVolumeSpecName: "kube-api-access-g594x") pod "7119b41d-07a7-4b01-8a58-5b67479d095f" (UID: "7119b41d-07a7-4b01-8a58-5b67479d095f"). InnerVolumeSpecName "kube-api-access-g594x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.380659 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-config" (OuterVolumeSpecName: "config") pod "7119b41d-07a7-4b01-8a58-5b67479d095f" (UID: "7119b41d-07a7-4b01-8a58-5b67479d095f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.387275 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7119b41d-07a7-4b01-8a58-5b67479d095f" (UID: "7119b41d-07a7-4b01-8a58-5b67479d095f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.387408 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7119b41d-07a7-4b01-8a58-5b67479d095f" (UID: "7119b41d-07a7-4b01-8a58-5b67479d095f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.399857 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7119b41d-07a7-4b01-8a58-5b67479d095f" (UID: "7119b41d-07a7-4b01-8a58-5b67479d095f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.415759 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-config\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.415795 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.415811 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.415822 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g594x\" (UniqueName: \"kubernetes.io/projected/7119b41d-07a7-4b01-8a58-5b67479d095f-kube-api-access-g594x\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.415837 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.439108 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7119b41d-07a7-4b01-8a58-5b67479d095f" (UID: "7119b41d-07a7-4b01-8a58-5b67479d095f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.517719 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7119b41d-07a7-4b01-8a58-5b67479d095f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.698424 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-ttk4q"] Dec 04 16:00:30 crc kubenswrapper[4878]: W1204 16:00:30.699944 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb2cb23f_6f8d_43f2_a251_35f680844694.slice/crio-179d1e37d627a45d2423631f2bff524e1117930e60b99aa050472d6d52775635 WatchSource:0}: Error finding container 179d1e37d627a45d2423631f2bff524e1117930e60b99aa050472d6d52775635: Status 404 returned error can't find the container with id 179d1e37d627a45d2423631f2bff524e1117930e60b99aa050472d6d52775635 Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.840813 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.841371 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.841488 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.842395 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e70960a91382094bb97b7778803753c08510ffcdf745328cfe037d41064c7754"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 16:00:30 crc kubenswrapper[4878]: I1204 16:00:30.842559 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://e70960a91382094bb97b7778803753c08510ffcdf745328cfe037d41064c7754" gracePeriod=600 Dec 04 16:00:31 crc kubenswrapper[4878]: I1204 16:00:31.216576 4878 generic.go:334] "Generic (PLEG): container finished" podID="cb2cb23f-6f8d-43f2-a251-35f680844694" containerID="713cfd771ba75dd64e8a41ab0ebe915567d05206a9e274b4bc73b2f070e7928f" exitCode=0 Dec 04 16:00:31 crc kubenswrapper[4878]: I1204 16:00:31.216785 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-ttk4q" event={"ID":"cb2cb23f-6f8d-43f2-a251-35f680844694","Type":"ContainerDied","Data":"713cfd771ba75dd64e8a41ab0ebe915567d05206a9e274b4bc73b2f070e7928f"} Dec 04 16:00:31 crc kubenswrapper[4878]: I1204 16:00:31.216978 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-ttk4q" event={"ID":"cb2cb23f-6f8d-43f2-a251-35f680844694","Type":"ContainerStarted","Data":"179d1e37d627a45d2423631f2bff524e1117930e60b99aa050472d6d52775635"} Dec 04 16:00:31 crc kubenswrapper[4878]: I1204 
16:00:31.220528 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="e70960a91382094bb97b7778803753c08510ffcdf745328cfe037d41064c7754" exitCode=0 Dec 04 16:00:31 crc kubenswrapper[4878]: I1204 16:00:31.220674 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-6ngd6" Dec 04 16:00:31 crc kubenswrapper[4878]: I1204 16:00:31.220675 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"e70960a91382094bb97b7778803753c08510ffcdf745328cfe037d41064c7754"} Dec 04 16:00:31 crc kubenswrapper[4878]: I1204 16:00:31.220741 4878 scope.go:117] "RemoveContainer" containerID="2ce89844f12ad0014470ec73950bdad107de7be05fbc862a4ec63ed384618b0a" Dec 04 16:00:31 crc kubenswrapper[4878]: I1204 16:00:31.329132 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6ngd6"] Dec 04 16:00:31 crc kubenswrapper[4878]: I1204 16:00:31.339631 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-6ngd6"] Dec 04 16:00:32 crc kubenswrapper[4878]: I1204 16:00:32.295947 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-ttk4q" event={"ID":"cb2cb23f-6f8d-43f2-a251-35f680844694","Type":"ContainerStarted","Data":"c1d89e1ef6a119abe45dec0b38bc282aa5fd73e131096cbd60b7c335804ddba9"} Dec 04 16:00:32 crc kubenswrapper[4878]: I1204 16:00:32.298002 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:32 crc kubenswrapper[4878]: I1204 16:00:32.308575 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" 
event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46"} Dec 04 16:00:32 crc kubenswrapper[4878]: I1204 16:00:32.336952 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-ttk4q" podStartSLOduration=3.336922946 podStartE2EDuration="3.336922946s" podCreationTimestamp="2025-12-04 16:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 16:00:32.332285569 +0000 UTC m=+1476.294822535" watchObservedRunningTime="2025-12-04 16:00:32.336922946 +0000 UTC m=+1476.299459902" Dec 04 16:00:33 crc kubenswrapper[4878]: I1204 16:00:33.193572 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7119b41d-07a7-4b01-8a58-5b67479d095f" path="/var/lib/kubelet/pods/7119b41d-07a7-4b01-8a58-5b67479d095f/volumes" Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.210953 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-ttk4q" Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.291605 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gm8p4"] Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.292365 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" podUID="421d4cec-296d-4809-ab7b-4261bbf6ebda" containerName="dnsmasq-dns" containerID="cri-o://8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878" gracePeriod=10 Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.805044 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.957144 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-nb\") pod \"421d4cec-296d-4809-ab7b-4261bbf6ebda\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.957302 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-sb\") pod \"421d4cec-296d-4809-ab7b-4261bbf6ebda\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.957340 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppw4p\" (UniqueName: \"kubernetes.io/projected/421d4cec-296d-4809-ab7b-4261bbf6ebda-kube-api-access-ppw4p\") pod \"421d4cec-296d-4809-ab7b-4261bbf6ebda\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.957457 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-svc\") pod \"421d4cec-296d-4809-ab7b-4261bbf6ebda\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.957487 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-swift-storage-0\") pod \"421d4cec-296d-4809-ab7b-4261bbf6ebda\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.957594 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-config\") pod \"421d4cec-296d-4809-ab7b-4261bbf6ebda\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.957773 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-openstack-edpm-ipam\") pod \"421d4cec-296d-4809-ab7b-4261bbf6ebda\" (UID: \"421d4cec-296d-4809-ab7b-4261bbf6ebda\") " Dec 04 16:00:40 crc kubenswrapper[4878]: I1204 16:00:40.962916 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/421d4cec-296d-4809-ab7b-4261bbf6ebda-kube-api-access-ppw4p" (OuterVolumeSpecName: "kube-api-access-ppw4p") pod "421d4cec-296d-4809-ab7b-4261bbf6ebda" (UID: "421d4cec-296d-4809-ab7b-4261bbf6ebda"). InnerVolumeSpecName "kube-api-access-ppw4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.014707 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "421d4cec-296d-4809-ab7b-4261bbf6ebda" (UID: "421d4cec-296d-4809-ab7b-4261bbf6ebda"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.015461 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-config" (OuterVolumeSpecName: "config") pod "421d4cec-296d-4809-ab7b-4261bbf6ebda" (UID: "421d4cec-296d-4809-ab7b-4261bbf6ebda"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.020162 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "421d4cec-296d-4809-ab7b-4261bbf6ebda" (UID: "421d4cec-296d-4809-ab7b-4261bbf6ebda"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.023844 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "421d4cec-296d-4809-ab7b-4261bbf6ebda" (UID: "421d4cec-296d-4809-ab7b-4261bbf6ebda"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.024122 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "421d4cec-296d-4809-ab7b-4261bbf6ebda" (UID: "421d4cec-296d-4809-ab7b-4261bbf6ebda"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.029666 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "421d4cec-296d-4809-ab7b-4261bbf6ebda" (UID: "421d4cec-296d-4809-ab7b-4261bbf6ebda"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.060601 4878 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.060651 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.060662 4878 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.060679 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppw4p\" (UniqueName: \"kubernetes.io/projected/421d4cec-296d-4809-ab7b-4261bbf6ebda-kube-api-access-ppw4p\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.060697 4878 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.060710 4878 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.060722 4878 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/421d4cec-296d-4809-ab7b-4261bbf6ebda-config\") on node \"crc\" DevicePath \"\"" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.392332 
4878 generic.go:334] "Generic (PLEG): container finished" podID="421d4cec-296d-4809-ab7b-4261bbf6ebda" containerID="8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878" exitCode=0 Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.392393 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" event={"ID":"421d4cec-296d-4809-ab7b-4261bbf6ebda","Type":"ContainerDied","Data":"8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878"} Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.392436 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" event={"ID":"421d4cec-296d-4809-ab7b-4261bbf6ebda","Type":"ContainerDied","Data":"06038ce31f32f3f420dd2bcbc6c2d20811cd555c286b61e5458860e428015de1"} Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.392462 4878 scope.go:117] "RemoveContainer" containerID="8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.392523 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-gm8p4" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.421677 4878 scope.go:117] "RemoveContainer" containerID="9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.427604 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gm8p4"] Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.437104 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-gm8p4"] Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.474297 4878 scope.go:117] "RemoveContainer" containerID="8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878" Dec 04 16:00:41 crc kubenswrapper[4878]: E1204 16:00:41.474818 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878\": container with ID starting with 8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878 not found: ID does not exist" containerID="8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.474857 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878"} err="failed to get container status \"8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878\": rpc error: code = NotFound desc = could not find container \"8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878\": container with ID starting with 8f69a688c67728d8f378a02d7ca4820c9f29d21a4abb912145e0c6fa205df878 not found: ID does not exist" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.474902 4878 scope.go:117] "RemoveContainer" containerID="9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469" Dec 04 
16:00:41 crc kubenswrapper[4878]: E1204 16:00:41.475926 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469\": container with ID starting with 9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469 not found: ID does not exist" containerID="9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469" Dec 04 16:00:41 crc kubenswrapper[4878]: I1204 16:00:41.475956 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469"} err="failed to get container status \"9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469\": rpc error: code = NotFound desc = could not find container \"9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469\": container with ID starting with 9795725c5c99732fb4709267723160913f0722484e01a9fed40b79eff1d54469 not found: ID does not exist" Dec 04 16:00:43 crc kubenswrapper[4878]: I1204 16:00:43.190759 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="421d4cec-296d-4809-ab7b-4261bbf6ebda" path="/var/lib/kubelet/pods/421d4cec-296d-4809-ab7b-4261bbf6ebda/volumes" Dec 04 16:00:51 crc kubenswrapper[4878]: I1204 16:00:51.512846 4878 generic.go:334] "Generic (PLEG): container finished" podID="976d4c5a-fb7f-4f01-8d0d-527a87639c33" containerID="9eb9c5716af6847faa4c652914dcb59ee5d5145c0ad4bf97c514077211789670" exitCode=0 Dec 04 16:00:51 crc kubenswrapper[4878]: I1204 16:00:51.512992 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"976d4c5a-fb7f-4f01-8d0d-527a87639c33","Type":"ContainerDied","Data":"9eb9c5716af6847faa4c652914dcb59ee5d5145c0ad4bf97c514077211789670"} Dec 04 16:00:52 crc kubenswrapper[4878]: I1204 16:00:52.527518 4878 generic.go:334] "Generic (PLEG): container finished" 
podID="c9686eee-f63a-40e8-a8a6-fe5901d0888c" containerID="5ee696b7b68e425b5066ac6209d3d4249e437c6b1e930ee94ed428afb5727ebd" exitCode=0 Dec 04 16:00:52 crc kubenswrapper[4878]: I1204 16:00:52.527606 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9686eee-f63a-40e8-a8a6-fe5901d0888c","Type":"ContainerDied","Data":"5ee696b7b68e425b5066ac6209d3d4249e437c6b1e930ee94ed428afb5727ebd"} Dec 04 16:00:52 crc kubenswrapper[4878]: I1204 16:00:52.530676 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"976d4c5a-fb7f-4f01-8d0d-527a87639c33","Type":"ContainerStarted","Data":"46256e6bd80d008708d3b52dc07b1670510bfb310ac2cd001e4f05f08ca48d0a"} Dec 04 16:00:52 crc kubenswrapper[4878]: I1204 16:00:52.530915 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 04 16:00:52 crc kubenswrapper[4878]: I1204 16:00:52.592793 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.592766216 podStartE2EDuration="36.592766216s" podCreationTimestamp="2025-12-04 16:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 16:00:52.583762259 +0000 UTC m=+1496.546299215" watchObservedRunningTime="2025-12-04 16:00:52.592766216 +0000 UTC m=+1496.555303172" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.543760 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9686eee-f63a-40e8-a8a6-fe5901d0888c","Type":"ContainerStarted","Data":"8b148646cf354f7a46e02281a7de431dbf6c2fea88b85ee2810456b4af09c9d5"} Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.544348 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:00:53 crc kubenswrapper[4878]: 
I1204 16:00:53.574441 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.574418236 podStartE2EDuration="36.574418236s" podCreationTimestamp="2025-12-04 16:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 16:00:53.565447369 +0000 UTC m=+1497.527984335" watchObservedRunningTime="2025-12-04 16:00:53.574418236 +0000 UTC m=+1497.536955192" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.612305 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7"] Dec 04 16:00:53 crc kubenswrapper[4878]: E1204 16:00:53.612915 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7119b41d-07a7-4b01-8a58-5b67479d095f" containerName="init" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.612936 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7119b41d-07a7-4b01-8a58-5b67479d095f" containerName="init" Dec 04 16:00:53 crc kubenswrapper[4878]: E1204 16:00:53.612958 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7119b41d-07a7-4b01-8a58-5b67479d095f" containerName="dnsmasq-dns" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.612966 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7119b41d-07a7-4b01-8a58-5b67479d095f" containerName="dnsmasq-dns" Dec 04 16:00:53 crc kubenswrapper[4878]: E1204 16:00:53.612981 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="421d4cec-296d-4809-ab7b-4261bbf6ebda" containerName="init" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.612993 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="421d4cec-296d-4809-ab7b-4261bbf6ebda" containerName="init" Dec 04 16:00:53 crc kubenswrapper[4878]: E1204 16:00:53.613012 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="421d4cec-296d-4809-ab7b-4261bbf6ebda" containerName="dnsmasq-dns" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.613019 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="421d4cec-296d-4809-ab7b-4261bbf6ebda" containerName="dnsmasq-dns" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.613348 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7119b41d-07a7-4b01-8a58-5b67479d095f" containerName="dnsmasq-dns" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.613367 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="421d4cec-296d-4809-ab7b-4261bbf6ebda" containerName="dnsmasq-dns" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.615654 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.617959 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.618170 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.619576 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.619653 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.624982 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7"] Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.750079 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.750131 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.750333 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4tf9\" (UniqueName: \"kubernetes.io/projected/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-kube-api-access-q4tf9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.750496 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.852893 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: 
\"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.853537 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.853662 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.853791 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4tf9\" (UniqueName: \"kubernetes.io/projected/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-kube-api-access-q4tf9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.858088 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.859661 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.863576 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.876009 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4tf9\" (UniqueName: \"kubernetes.io/projected/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-kube-api-access-q4tf9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:53 crc kubenswrapper[4878]: I1204 16:00:53.946378 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:00:54 crc kubenswrapper[4878]: W1204 16:00:54.503006 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fec4d01_1d56_4db6_ac76_cb8e2b62a659.slice/crio-16a9e83e514d1e34c49fd83f477a5255447f056c72d07f3b4cb90c7cb083d6e2 WatchSource:0}: Error finding container 16a9e83e514d1e34c49fd83f477a5255447f056c72d07f3b4cb90c7cb083d6e2: Status 404 returned error can't find the container with id 16a9e83e514d1e34c49fd83f477a5255447f056c72d07f3b4cb90c7cb083d6e2 Dec 04 16:00:54 crc kubenswrapper[4878]: I1204 16:00:54.504051 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7"] Dec 04 16:00:54 crc kubenswrapper[4878]: I1204 16:00:54.555030 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" event={"ID":"5fec4d01-1d56-4db6-ac76-cb8e2b62a659","Type":"ContainerStarted","Data":"16a9e83e514d1e34c49fd83f477a5255447f056c72d07f3b4cb90c7cb083d6e2"} Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.141074 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29414401-99rqr"] Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.142716 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.156768 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29414401-99rqr"] Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.319268 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-combined-ca-bundle\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.320066 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-config-data\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.320311 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whsql\" (UniqueName: \"kubernetes.io/projected/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-kube-api-access-whsql\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.320457 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-fernet-keys\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.422568 4878 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-config-data\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.422651 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whsql\" (UniqueName: \"kubernetes.io/projected/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-kube-api-access-whsql\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.422728 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-fernet-keys\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.422798 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-combined-ca-bundle\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.430570 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-combined-ca-bundle\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.432502 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-config-data\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.436171 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-fernet-keys\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.443130 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whsql\" (UniqueName: \"kubernetes.io/projected/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-kube-api-access-whsql\") pod \"keystone-cron-29414401-99rqr\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:00 crc kubenswrapper[4878]: I1204 16:01:00.460838 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:01 crc kubenswrapper[4878]: I1204 16:01:01.025358 4878 scope.go:117] "RemoveContainer" containerID="8d304619dda8e5845b94774d4bb834b7688f3f9bcf47949d9acfbbddc368b96c" Dec 04 16:01:01 crc kubenswrapper[4878]: I1204 16:01:01.291168 4878 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod7119b41d-07a7-4b01-8a58-5b67479d095f"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod7119b41d-07a7-4b01-8a58-5b67479d095f] : Timed out while waiting for systemd to remove kubepods-besteffort-pod7119b41d_07a7_4b01_8a58_5b67479d095f.slice" Dec 04 16:01:03 crc kubenswrapper[4878]: I1204 16:01:03.776176 4878 scope.go:117] "RemoveContainer" containerID="e92152e572870aaebeb8c9f7a1ceb576d574fd9f0c0c2b871d6ac359655a8c97" Dec 04 16:01:03 crc kubenswrapper[4878]: I1204 16:01:03.841941 4878 scope.go:117] "RemoveContainer" containerID="28a9e7c397ffa4fdd05da38ec32cf6abd29d30987a52ae642e1627d10a7f2fc0" Dec 04 16:01:03 crc kubenswrapper[4878]: I1204 16:01:03.897685 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:01:04 crc kubenswrapper[4878]: W1204 16:01:04.282034 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d9e8d0_f3a1_4a2b_8815_66bee6417b5c.slice/crio-ed76d13ac2696f9d4dff77708e81dfdbf04ca63acdc23ea93a12ef21b63d8e3b WatchSource:0}: Error finding container ed76d13ac2696f9d4dff77708e81dfdbf04ca63acdc23ea93a12ef21b63d8e3b: Status 404 returned error can't find the container with id ed76d13ac2696f9d4dff77708e81dfdbf04ca63acdc23ea93a12ef21b63d8e3b Dec 04 16:01:04 crc kubenswrapper[4878]: I1204 16:01:04.284177 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29414401-99rqr"] Dec 04 16:01:04 crc kubenswrapper[4878]: I1204 16:01:04.661186 4878 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" event={"ID":"5fec4d01-1d56-4db6-ac76-cb8e2b62a659","Type":"ContainerStarted","Data":"1d219b944e9c338276d9214d21dd20dc07367d215a586dd68276282ab128b501"} Dec 04 16:01:04 crc kubenswrapper[4878]: I1204 16:01:04.663383 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29414401-99rqr" event={"ID":"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c","Type":"ContainerStarted","Data":"9b367be53436992db974e420076cf56ec033aa47b5dad37192fe1fbcaba63ae6"} Dec 04 16:01:04 crc kubenswrapper[4878]: I1204 16:01:04.663516 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29414401-99rqr" event={"ID":"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c","Type":"ContainerStarted","Data":"ed76d13ac2696f9d4dff77708e81dfdbf04ca63acdc23ea93a12ef21b63d8e3b"} Dec 04 16:01:04 crc kubenswrapper[4878]: I1204 16:01:04.684593 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" podStartSLOduration=2.296383554 podStartE2EDuration="11.684570211s" podCreationTimestamp="2025-12-04 16:00:53 +0000 UTC" firstStartedPulling="2025-12-04 16:00:54.505646321 +0000 UTC m=+1498.468183277" lastFinishedPulling="2025-12-04 16:01:03.893832978 +0000 UTC m=+1507.856369934" observedRunningTime="2025-12-04 16:01:04.679135503 +0000 UTC m=+1508.641672459" watchObservedRunningTime="2025-12-04 16:01:04.684570211 +0000 UTC m=+1508.647107167" Dec 04 16:01:04 crc kubenswrapper[4878]: I1204 16:01:04.706577 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29414401-99rqr" podStartSLOduration=4.706556447 podStartE2EDuration="4.706556447s" podCreationTimestamp="2025-12-04 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 
16:01:04.695528278 +0000 UTC m=+1508.658065254" watchObservedRunningTime="2025-12-04 16:01:04.706556447 +0000 UTC m=+1508.669093403" Dec 04 16:01:06 crc kubenswrapper[4878]: I1204 16:01:06.770151 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 04 16:01:07 crc kubenswrapper[4878]: I1204 16:01:07.403138 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 04 16:01:07 crc kubenswrapper[4878]: I1204 16:01:07.692800 4878 generic.go:334] "Generic (PLEG): container finished" podID="e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c" containerID="9b367be53436992db974e420076cf56ec033aa47b5dad37192fe1fbcaba63ae6" exitCode=0 Dec 04 16:01:07 crc kubenswrapper[4878]: I1204 16:01:07.692863 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29414401-99rqr" event={"ID":"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c","Type":"ContainerDied","Data":"9b367be53436992db974e420076cf56ec033aa47b5dad37192fe1fbcaba63ae6"} Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.014646 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.145003 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whsql\" (UniqueName: \"kubernetes.io/projected/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-kube-api-access-whsql\") pod \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.145178 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-combined-ca-bundle\") pod \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.148119 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-config-data\") pod \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.148468 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-fernet-keys\") pod \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\" (UID: \"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c\") " Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.152393 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-kube-api-access-whsql" (OuterVolumeSpecName: "kube-api-access-whsql") pod "e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c" (UID: "e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c"). InnerVolumeSpecName "kube-api-access-whsql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.152950 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c" (UID: "e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.178077 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c" (UID: "e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.216300 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-config-data" (OuterVolumeSpecName: "config-data") pod "e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c" (UID: "e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.251739 4878 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.251780 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whsql\" (UniqueName: \"kubernetes.io/projected/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-kube-api-access-whsql\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.251796 4878 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.251809 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.722948 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29414401-99rqr" event={"ID":"e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c","Type":"ContainerDied","Data":"ed76d13ac2696f9d4dff77708e81dfdbf04ca63acdc23ea93a12ef21b63d8e3b"} Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.723006 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed76d13ac2696f9d4dff77708e81dfdbf04ca63acdc23ea93a12ef21b63d8e3b" Dec 04 16:01:09 crc kubenswrapper[4878]: I1204 16:01:09.723064 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29414401-99rqr" Dec 04 16:01:15 crc kubenswrapper[4878]: I1204 16:01:15.784467 4878 generic.go:334] "Generic (PLEG): container finished" podID="5fec4d01-1d56-4db6-ac76-cb8e2b62a659" containerID="1d219b944e9c338276d9214d21dd20dc07367d215a586dd68276282ab128b501" exitCode=0 Dec 04 16:01:15 crc kubenswrapper[4878]: I1204 16:01:15.784547 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" event={"ID":"5fec4d01-1d56-4db6-ac76-cb8e2b62a659","Type":"ContainerDied","Data":"1d219b944e9c338276d9214d21dd20dc07367d215a586dd68276282ab128b501"} Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.277098 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.383280 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-ssh-key\") pod \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.383418 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4tf9\" (UniqueName: \"kubernetes.io/projected/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-kube-api-access-q4tf9\") pod \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.383482 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-inventory\") pod \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.383547 4878 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-repo-setup-combined-ca-bundle\") pod \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\" (UID: \"5fec4d01-1d56-4db6-ac76-cb8e2b62a659\") " Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.390041 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "5fec4d01-1d56-4db6-ac76-cb8e2b62a659" (UID: "5fec4d01-1d56-4db6-ac76-cb8e2b62a659"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.390735 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-kube-api-access-q4tf9" (OuterVolumeSpecName: "kube-api-access-q4tf9") pod "5fec4d01-1d56-4db6-ac76-cb8e2b62a659" (UID: "5fec4d01-1d56-4db6-ac76-cb8e2b62a659"). InnerVolumeSpecName "kube-api-access-q4tf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.416997 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-inventory" (OuterVolumeSpecName: "inventory") pod "5fec4d01-1d56-4db6-ac76-cb8e2b62a659" (UID: "5fec4d01-1d56-4db6-ac76-cb8e2b62a659"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.417035 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5fec4d01-1d56-4db6-ac76-cb8e2b62a659" (UID: "5fec4d01-1d56-4db6-ac76-cb8e2b62a659"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.485665 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.485700 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4tf9\" (UniqueName: \"kubernetes.io/projected/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-kube-api-access-q4tf9\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.485712 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.485727 4878 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fec4d01-1d56-4db6-ac76-cb8e2b62a659-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.809055 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" event={"ID":"5fec4d01-1d56-4db6-ac76-cb8e2b62a659","Type":"ContainerDied","Data":"16a9e83e514d1e34c49fd83f477a5255447f056c72d07f3b4cb90c7cb083d6e2"} Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.809104 4878 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="16a9e83e514d1e34c49fd83f477a5255447f056c72d07f3b4cb90c7cb083d6e2" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.809107 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.971951 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr"] Dec 04 16:01:17 crc kubenswrapper[4878]: E1204 16:01:17.972496 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fec4d01-1d56-4db6-ac76-cb8e2b62a659" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.972516 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fec4d01-1d56-4db6-ac76-cb8e2b62a659" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 04 16:01:17 crc kubenswrapper[4878]: E1204 16:01:17.972559 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c" containerName="keystone-cron" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.972567 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c" containerName="keystone-cron" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.972792 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fec4d01-1d56-4db6-ac76-cb8e2b62a659" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.972821 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c" containerName="keystone-cron" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.973660 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.977003 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.977531 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.977550 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:01:17 crc kubenswrapper[4878]: I1204 16:01:17.978944 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.000073 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2wlpr\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.000587 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2wlpr\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.000890 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg64v\" (UniqueName: \"kubernetes.io/projected/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-kube-api-access-qg64v\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-2wlpr\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.005444 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr"] Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.102468 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2wlpr\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.102559 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2wlpr\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.102591 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg64v\" (UniqueName: \"kubernetes.io/projected/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-kube-api-access-qg64v\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2wlpr\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.108486 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2wlpr\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.117768 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2wlpr\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.119571 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg64v\" (UniqueName: \"kubernetes.io/projected/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-kube-api-access-qg64v\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2wlpr\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.300458 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:18 crc kubenswrapper[4878]: W1204 16:01:18.879456 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa06f0eb_73fb_4882_9dbd_bb7c4dfb11fb.slice/crio-a13599561a5592832e3a2a797d58829b135bcfd0d4946b17543038341051ca2e WatchSource:0}: Error finding container a13599561a5592832e3a2a797d58829b135bcfd0d4946b17543038341051ca2e: Status 404 returned error can't find the container with id a13599561a5592832e3a2a797d58829b135bcfd0d4946b17543038341051ca2e Dec 04 16:01:18 crc kubenswrapper[4878]: I1204 16:01:18.885469 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr"] Dec 04 16:01:19 crc kubenswrapper[4878]: I1204 16:01:19.834732 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" event={"ID":"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb","Type":"ContainerStarted","Data":"725d82d606f7aa782ec84b58b20eea2d9ade144dcfe6df4229c14dd8a127796c"} Dec 04 16:01:19 crc kubenswrapper[4878]: I1204 16:01:19.835318 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" event={"ID":"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb","Type":"ContainerStarted","Data":"a13599561a5592832e3a2a797d58829b135bcfd0d4946b17543038341051ca2e"} Dec 04 16:01:19 crc kubenswrapper[4878]: I1204 16:01:19.864079 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" podStartSLOduration=2.213920975 podStartE2EDuration="2.864050913s" podCreationTimestamp="2025-12-04 16:01:17 +0000 UTC" firstStartedPulling="2025-12-04 16:01:18.882820484 +0000 UTC m=+1522.845357440" lastFinishedPulling="2025-12-04 16:01:19.532950412 +0000 UTC m=+1523.495487378" observedRunningTime="2025-12-04 
16:01:19.85598924 +0000 UTC m=+1523.818526206" watchObservedRunningTime="2025-12-04 16:01:19.864050913 +0000 UTC m=+1523.826587889" Dec 04 16:01:22 crc kubenswrapper[4878]: I1204 16:01:22.870035 4878 generic.go:334] "Generic (PLEG): container finished" podID="fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb" containerID="725d82d606f7aa782ec84b58b20eea2d9ade144dcfe6df4229c14dd8a127796c" exitCode=0 Dec 04 16:01:22 crc kubenswrapper[4878]: I1204 16:01:22.870123 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" event={"ID":"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb","Type":"ContainerDied","Data":"725d82d606f7aa782ec84b58b20eea2d9ade144dcfe6df4229c14dd8a127796c"} Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.329779 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.439812 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-inventory\") pod \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.440010 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg64v\" (UniqueName: \"kubernetes.io/projected/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-kube-api-access-qg64v\") pod \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.440092 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-ssh-key\") pod \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\" (UID: \"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb\") " Dec 04 16:01:24 crc 
kubenswrapper[4878]: I1204 16:01:24.445748 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-kube-api-access-qg64v" (OuterVolumeSpecName: "kube-api-access-qg64v") pod "fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb" (UID: "fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb"). InnerVolumeSpecName "kube-api-access-qg64v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.474478 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-inventory" (OuterVolumeSpecName: "inventory") pod "fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb" (UID: "fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.474495 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb" (UID: "fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.542791 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.542837 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg64v\" (UniqueName: \"kubernetes.io/projected/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-kube-api-access-qg64v\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.542850 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.893009 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" event={"ID":"fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb","Type":"ContainerDied","Data":"a13599561a5592832e3a2a797d58829b135bcfd0d4946b17543038341051ca2e"} Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.893291 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13599561a5592832e3a2a797d58829b135bcfd0d4946b17543038341051ca2e" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.893107 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2wlpr" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.969688 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl"] Dec 04 16:01:24 crc kubenswrapper[4878]: E1204 16:01:24.970275 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.970301 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.970552 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.971517 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.975661 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.975856 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.976007 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.977856 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:01:24 crc kubenswrapper[4878]: I1204 16:01:24.985735 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl"] Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.154762 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.154944 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.155027 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkmpq\" (UniqueName: \"kubernetes.io/projected/844663ab-0b83-4d6a-9493-b8ce0743f963-kube-api-access-pkmpq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.155100 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.257701 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.258050 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.258425 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.258704 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkmpq\" (UniqueName: \"kubernetes.io/projected/844663ab-0b83-4d6a-9493-b8ce0743f963-kube-api-access-pkmpq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.263411 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.269663 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.275461 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.278421 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pkmpq\" (UniqueName: \"kubernetes.io/projected/844663ab-0b83-4d6a-9493-b8ce0743f963-kube-api-access-pkmpq\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.295153 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.834052 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl"] Dec 04 16:01:25 crc kubenswrapper[4878]: I1204 16:01:25.906108 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" event={"ID":"844663ab-0b83-4d6a-9493-b8ce0743f963","Type":"ContainerStarted","Data":"0549b115ed5d84b0da28510e629200f9b1c807bdbe965ee53ad46d36b62cc845"} Dec 04 16:01:26 crc kubenswrapper[4878]: I1204 16:01:26.953906 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" event={"ID":"844663ab-0b83-4d6a-9493-b8ce0743f963","Type":"ContainerStarted","Data":"9dfb9ea0306fa0c7240244825ab4b08c13e81adf12042e33418d4bc1e7b5a391"} Dec 04 16:01:26 crc kubenswrapper[4878]: I1204 16:01:26.981113 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" podStartSLOduration=2.428993618 podStartE2EDuration="2.981089277s" podCreationTimestamp="2025-12-04 16:01:24 +0000 UTC" firstStartedPulling="2025-12-04 16:01:25.842595142 +0000 UTC m=+1529.805132098" lastFinishedPulling="2025-12-04 16:01:26.394690801 +0000 UTC m=+1530.357227757" observedRunningTime="2025-12-04 16:01:26.971614308 +0000 UTC m=+1530.934151264" 
watchObservedRunningTime="2025-12-04 16:01:26.981089277 +0000 UTC m=+1530.943626233" Dec 04 16:02:04 crc kubenswrapper[4878]: I1204 16:02:04.111499 4878 scope.go:117] "RemoveContainer" containerID="af8f0d0429f9116de445491ff3e1f6a451a3a12b27b61b49708dd8def9270850" Dec 04 16:02:04 crc kubenswrapper[4878]: I1204 16:02:04.134051 4878 scope.go:117] "RemoveContainer" containerID="db3eb9e4dc4ddebc00b939ce927137183c7e14fcf68138b3497f67a064b3969c" Dec 04 16:02:04 crc kubenswrapper[4878]: I1204 16:02:04.168792 4878 scope.go:117] "RemoveContainer" containerID="4c53099853e21956133269693abdf9a9ee5aace0f0fcce7c6a2caaf83304be86" Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.300088 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jx5zx"] Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.303576 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.321387 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx5zx"] Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.440162 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkq4d\" (UniqueName: \"kubernetes.io/projected/89d35644-d161-4524-b5b9-585074193922-kube-api-access-nkq4d\") pod \"redhat-marketplace-jx5zx\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.440501 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-catalog-content\") pod \"redhat-marketplace-jx5zx\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 
04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.440601 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-utilities\") pod \"redhat-marketplace-jx5zx\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.542626 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkq4d\" (UniqueName: \"kubernetes.io/projected/89d35644-d161-4524-b5b9-585074193922-kube-api-access-nkq4d\") pod \"redhat-marketplace-jx5zx\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.542728 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-catalog-content\") pod \"redhat-marketplace-jx5zx\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.542775 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-utilities\") pod \"redhat-marketplace-jx5zx\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.543462 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-utilities\") pod \"redhat-marketplace-jx5zx\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:36 crc kubenswrapper[4878]: 
I1204 16:02:36.543483 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-catalog-content\") pod \"redhat-marketplace-jx5zx\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.567338 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkq4d\" (UniqueName: \"kubernetes.io/projected/89d35644-d161-4524-b5b9-585074193922-kube-api-access-nkq4d\") pod \"redhat-marketplace-jx5zx\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:36 crc kubenswrapper[4878]: I1204 16:02:36.630651 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:37 crc kubenswrapper[4878]: I1204 16:02:37.174435 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx5zx"] Dec 04 16:02:37 crc kubenswrapper[4878]: I1204 16:02:37.686158 4878 generic.go:334] "Generic (PLEG): container finished" podID="89d35644-d161-4524-b5b9-585074193922" containerID="ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9" exitCode=0 Dec 04 16:02:37 crc kubenswrapper[4878]: I1204 16:02:37.686235 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx5zx" event={"ID":"89d35644-d161-4524-b5b9-585074193922","Type":"ContainerDied","Data":"ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9"} Dec 04 16:02:37 crc kubenswrapper[4878]: I1204 16:02:37.686472 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx5zx" 
event={"ID":"89d35644-d161-4524-b5b9-585074193922","Type":"ContainerStarted","Data":"e9718553c72176b69323293a197ecd53d8ea074b8cd6d176b364d3fac401b6f1"} Dec 04 16:02:38 crc kubenswrapper[4878]: I1204 16:02:38.698459 4878 generic.go:334] "Generic (PLEG): container finished" podID="89d35644-d161-4524-b5b9-585074193922" containerID="6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07" exitCode=0 Dec 04 16:02:38 crc kubenswrapper[4878]: I1204 16:02:38.698576 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx5zx" event={"ID":"89d35644-d161-4524-b5b9-585074193922","Type":"ContainerDied","Data":"6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07"} Dec 04 16:02:39 crc kubenswrapper[4878]: I1204 16:02:39.711641 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx5zx" event={"ID":"89d35644-d161-4524-b5b9-585074193922","Type":"ContainerStarted","Data":"08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a"} Dec 04 16:02:39 crc kubenswrapper[4878]: I1204 16:02:39.741302 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jx5zx" podStartSLOduration=2.317951359 podStartE2EDuration="3.741275467s" podCreationTimestamp="2025-12-04 16:02:36 +0000 UTC" firstStartedPulling="2025-12-04 16:02:37.688424803 +0000 UTC m=+1601.650961759" lastFinishedPulling="2025-12-04 16:02:39.111748911 +0000 UTC m=+1603.074285867" observedRunningTime="2025-12-04 16:02:39.729551891 +0000 UTC m=+1603.692088847" watchObservedRunningTime="2025-12-04 16:02:39.741275467 +0000 UTC m=+1603.703812423" Dec 04 16:02:46 crc kubenswrapper[4878]: I1204 16:02:46.631613 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:46 crc kubenswrapper[4878]: I1204 16:02:46.632226 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:46 crc kubenswrapper[4878]: I1204 16:02:46.686647 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:46 crc kubenswrapper[4878]: I1204 16:02:46.837601 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:46 crc kubenswrapper[4878]: I1204 16:02:46.933289 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx5zx"] Dec 04 16:02:48 crc kubenswrapper[4878]: I1204 16:02:48.817310 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jx5zx" podUID="89d35644-d161-4524-b5b9-585074193922" containerName="registry-server" containerID="cri-o://08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a" gracePeriod=2 Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.293469 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.439327 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-catalog-content\") pod \"89d35644-d161-4524-b5b9-585074193922\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.439581 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-utilities\") pod \"89d35644-d161-4524-b5b9-585074193922\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.439773 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkq4d\" (UniqueName: \"kubernetes.io/projected/89d35644-d161-4524-b5b9-585074193922-kube-api-access-nkq4d\") pod \"89d35644-d161-4524-b5b9-585074193922\" (UID: \"89d35644-d161-4524-b5b9-585074193922\") " Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.440587 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-utilities" (OuterVolumeSpecName: "utilities") pod "89d35644-d161-4524-b5b9-585074193922" (UID: "89d35644-d161-4524-b5b9-585074193922"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.446989 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d35644-d161-4524-b5b9-585074193922-kube-api-access-nkq4d" (OuterVolumeSpecName: "kube-api-access-nkq4d") pod "89d35644-d161-4524-b5b9-585074193922" (UID: "89d35644-d161-4524-b5b9-585074193922"). InnerVolumeSpecName "kube-api-access-nkq4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.462826 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89d35644-d161-4524-b5b9-585074193922" (UID: "89d35644-d161-4524-b5b9-585074193922"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.542573 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.542659 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkq4d\" (UniqueName: \"kubernetes.io/projected/89d35644-d161-4524-b5b9-585074193922-kube-api-access-nkq4d\") on node \"crc\" DevicePath \"\"" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.542672 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89d35644-d161-4524-b5b9-585074193922-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.835079 4878 generic.go:334] "Generic (PLEG): container finished" podID="89d35644-d161-4524-b5b9-585074193922" containerID="08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a" exitCode=0 Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.835131 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx5zx" event={"ID":"89d35644-d161-4524-b5b9-585074193922","Type":"ContainerDied","Data":"08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a"} Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.835169 4878 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-jx5zx" event={"ID":"89d35644-d161-4524-b5b9-585074193922","Type":"ContainerDied","Data":"e9718553c72176b69323293a197ecd53d8ea074b8cd6d176b364d3fac401b6f1"} Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.835191 4878 scope.go:117] "RemoveContainer" containerID="08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.835233 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx5zx" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.869232 4878 scope.go:117] "RemoveContainer" containerID="6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.931214 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx5zx"] Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.946665 4878 scope.go:117] "RemoveContainer" containerID="ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.950739 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx5zx"] Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.989196 4878 scope.go:117] "RemoveContainer" containerID="08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a" Dec 04 16:02:49 crc kubenswrapper[4878]: E1204 16:02:49.989841 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a\": container with ID starting with 08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a not found: ID does not exist" containerID="08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.989896 4878 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a"} err="failed to get container status \"08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a\": rpc error: code = NotFound desc = could not find container \"08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a\": container with ID starting with 08c0600a8315520f4616f5476258431c0c6ccbdfacd0dbf1615a3ce56d516f2a not found: ID does not exist" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.989920 4878 scope.go:117] "RemoveContainer" containerID="6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07" Dec 04 16:02:49 crc kubenswrapper[4878]: E1204 16:02:49.990454 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07\": container with ID starting with 6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07 not found: ID does not exist" containerID="6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.990497 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07"} err="failed to get container status \"6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07\": rpc error: code = NotFound desc = could not find container \"6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07\": container with ID starting with 6bc1839817cfda593859226fa48ea24c7e94e56b8d069cd85337859d6ea16b07 not found: ID does not exist" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.990523 4878 scope.go:117] "RemoveContainer" containerID="ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9" Dec 04 16:02:49 crc kubenswrapper[4878]: E1204 
16:02:49.990919 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9\": container with ID starting with ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9 not found: ID does not exist" containerID="ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9" Dec 04 16:02:49 crc kubenswrapper[4878]: I1204 16:02:49.990946 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9"} err="failed to get container status \"ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9\": rpc error: code = NotFound desc = could not find container \"ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9\": container with ID starting with ea76c188b8c8fa73f4edfc586b45aab3d2cd449db1e07a44ec7603f3d656eee9 not found: ID does not exist" Dec 04 16:02:51 crc kubenswrapper[4878]: I1204 16:02:51.198780 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d35644-d161-4524-b5b9-585074193922" path="/var/lib/kubelet/pods/89d35644-d161-4524-b5b9-585074193922/volumes" Dec 04 16:03:00 crc kubenswrapper[4878]: I1204 16:03:00.840786 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:03:00 crc kubenswrapper[4878]: I1204 16:03:00.842016 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.052571 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dsrj9"] Dec 04 16:03:16 crc kubenswrapper[4878]: E1204 16:03:16.053685 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d35644-d161-4524-b5b9-585074193922" containerName="registry-server" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.053702 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d35644-d161-4524-b5b9-585074193922" containerName="registry-server" Dec 04 16:03:16 crc kubenswrapper[4878]: E1204 16:03:16.053760 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d35644-d161-4524-b5b9-585074193922" containerName="extract-content" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.053768 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d35644-d161-4524-b5b9-585074193922" containerName="extract-content" Dec 04 16:03:16 crc kubenswrapper[4878]: E1204 16:03:16.053781 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d35644-d161-4524-b5b9-585074193922" containerName="extract-utilities" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.053790 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d35644-d161-4524-b5b9-585074193922" containerName="extract-utilities" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.053990 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d35644-d161-4524-b5b9-585074193922" containerName="registry-server" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.055505 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.070938 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsrj9"] Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.119685 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-utilities\") pod \"certified-operators-dsrj9\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.120062 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-catalog-content\") pod \"certified-operators-dsrj9\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.120139 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd8gm\" (UniqueName: \"kubernetes.io/projected/b5a628e7-3a75-48d9-999a-96beebc623b4-kube-api-access-xd8gm\") pod \"certified-operators-dsrj9\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.221735 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-catalog-content\") pod \"certified-operators-dsrj9\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.221813 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xd8gm\" (UniqueName: \"kubernetes.io/projected/b5a628e7-3a75-48d9-999a-96beebc623b4-kube-api-access-xd8gm\") pod \"certified-operators-dsrj9\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.221897 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-utilities\") pod \"certified-operators-dsrj9\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.222536 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-utilities\") pod \"certified-operators-dsrj9\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.222597 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-catalog-content\") pod \"certified-operators-dsrj9\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.251843 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd8gm\" (UniqueName: \"kubernetes.io/projected/b5a628e7-3a75-48d9-999a-96beebc623b4-kube-api-access-xd8gm\") pod \"certified-operators-dsrj9\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.382684 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:16 crc kubenswrapper[4878]: I1204 16:03:16.921666 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsrj9"] Dec 04 16:03:17 crc kubenswrapper[4878]: I1204 16:03:17.105125 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsrj9" event={"ID":"b5a628e7-3a75-48d9-999a-96beebc623b4","Type":"ContainerStarted","Data":"399e5aebca1f57e9a224065220f789209d2233548e30b2beb127437e5ee0269b"} Dec 04 16:03:18 crc kubenswrapper[4878]: I1204 16:03:18.120471 4878 generic.go:334] "Generic (PLEG): container finished" podID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerID="4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a" exitCode=0 Dec 04 16:03:18 crc kubenswrapper[4878]: I1204 16:03:18.120730 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsrj9" event={"ID":"b5a628e7-3a75-48d9-999a-96beebc623b4","Type":"ContainerDied","Data":"4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a"} Dec 04 16:03:18 crc kubenswrapper[4878]: I1204 16:03:18.123955 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 16:03:19 crc kubenswrapper[4878]: I1204 16:03:19.131511 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsrj9" event={"ID":"b5a628e7-3a75-48d9-999a-96beebc623b4","Type":"ContainerStarted","Data":"dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450"} Dec 04 16:03:20 crc kubenswrapper[4878]: I1204 16:03:20.142713 4878 generic.go:334] "Generic (PLEG): container finished" podID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerID="dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450" exitCode=0 Dec 04 16:03:20 crc kubenswrapper[4878]: I1204 16:03:20.142772 4878 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-dsrj9" event={"ID":"b5a628e7-3a75-48d9-999a-96beebc623b4","Type":"ContainerDied","Data":"dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450"} Dec 04 16:03:21 crc kubenswrapper[4878]: I1204 16:03:21.155339 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsrj9" event={"ID":"b5a628e7-3a75-48d9-999a-96beebc623b4","Type":"ContainerStarted","Data":"ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4"} Dec 04 16:03:21 crc kubenswrapper[4878]: I1204 16:03:21.182838 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dsrj9" podStartSLOduration=2.707797632 podStartE2EDuration="5.182815985s" podCreationTimestamp="2025-12-04 16:03:16 +0000 UTC" firstStartedPulling="2025-12-04 16:03:18.123432941 +0000 UTC m=+1642.085969897" lastFinishedPulling="2025-12-04 16:03:20.598451294 +0000 UTC m=+1644.560988250" observedRunningTime="2025-12-04 16:03:21.181565454 +0000 UTC m=+1645.144102430" watchObservedRunningTime="2025-12-04 16:03:21.182815985 +0000 UTC m=+1645.145352941" Dec 04 16:03:26 crc kubenswrapper[4878]: I1204 16:03:26.383695 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:26 crc kubenswrapper[4878]: I1204 16:03:26.384449 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:26 crc kubenswrapper[4878]: I1204 16:03:26.450201 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:27 crc kubenswrapper[4878]: I1204 16:03:27.260112 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:27 crc kubenswrapper[4878]: I1204 
16:03:27.319545 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsrj9"] Dec 04 16:03:29 crc kubenswrapper[4878]: I1204 16:03:29.237257 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dsrj9" podUID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerName="registry-server" containerID="cri-o://ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4" gracePeriod=2 Dec 04 16:03:29 crc kubenswrapper[4878]: I1204 16:03:29.917030 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:29 crc kubenswrapper[4878]: I1204 16:03:29.922365 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd8gm\" (UniqueName: \"kubernetes.io/projected/b5a628e7-3a75-48d9-999a-96beebc623b4-kube-api-access-xd8gm\") pod \"b5a628e7-3a75-48d9-999a-96beebc623b4\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " Dec 04 16:03:29 crc kubenswrapper[4878]: I1204 16:03:29.922732 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-utilities\") pod \"b5a628e7-3a75-48d9-999a-96beebc623b4\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " Dec 04 16:03:29 crc kubenswrapper[4878]: I1204 16:03:29.922886 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-catalog-content\") pod \"b5a628e7-3a75-48d9-999a-96beebc623b4\" (UID: \"b5a628e7-3a75-48d9-999a-96beebc623b4\") " Dec 04 16:03:29 crc kubenswrapper[4878]: I1204 16:03:29.923922 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-utilities" (OuterVolumeSpecName: 
"utilities") pod "b5a628e7-3a75-48d9-999a-96beebc623b4" (UID: "b5a628e7-3a75-48d9-999a-96beebc623b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:03:29 crc kubenswrapper[4878]: I1204 16:03:29.930169 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:03:29 crc kubenswrapper[4878]: I1204 16:03:29.932363 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a628e7-3a75-48d9-999a-96beebc623b4-kube-api-access-xd8gm" (OuterVolumeSpecName: "kube-api-access-xd8gm") pod "b5a628e7-3a75-48d9-999a-96beebc623b4" (UID: "b5a628e7-3a75-48d9-999a-96beebc623b4"). InnerVolumeSpecName "kube-api-access-xd8gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.022059 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5a628e7-3a75-48d9-999a-96beebc623b4" (UID: "b5a628e7-3a75-48d9-999a-96beebc623b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.032726 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5a628e7-3a75-48d9-999a-96beebc623b4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.032770 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd8gm\" (UniqueName: \"kubernetes.io/projected/b5a628e7-3a75-48d9-999a-96beebc623b4-kube-api-access-xd8gm\") on node \"crc\" DevicePath \"\"" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.248127 4878 generic.go:334] "Generic (PLEG): container finished" podID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerID="ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4" exitCode=0 Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.248173 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsrj9" event={"ID":"b5a628e7-3a75-48d9-999a-96beebc623b4","Type":"ContainerDied","Data":"ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4"} Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.248242 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsrj9" event={"ID":"b5a628e7-3a75-48d9-999a-96beebc623b4","Type":"ContainerDied","Data":"399e5aebca1f57e9a224065220f789209d2233548e30b2beb127437e5ee0269b"} Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.248267 4878 scope.go:117] "RemoveContainer" containerID="ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.248414 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dsrj9" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.283496 4878 scope.go:117] "RemoveContainer" containerID="dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.287374 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsrj9"] Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.298234 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dsrj9"] Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.312332 4878 scope.go:117] "RemoveContainer" containerID="4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.361167 4878 scope.go:117] "RemoveContainer" containerID="ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4" Dec 04 16:03:30 crc kubenswrapper[4878]: E1204 16:03:30.371065 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4\": container with ID starting with ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4 not found: ID does not exist" containerID="ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.371138 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4"} err="failed to get container status \"ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4\": rpc error: code = NotFound desc = could not find container \"ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4\": container with ID starting with ac334c1215cb86071eca477077c826b2483174aae8a57ee6e8d77ca66a7fddb4 not 
found: ID does not exist" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.371172 4878 scope.go:117] "RemoveContainer" containerID="dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450" Dec 04 16:03:30 crc kubenswrapper[4878]: E1204 16:03:30.373107 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450\": container with ID starting with dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450 not found: ID does not exist" containerID="dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.373224 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450"} err="failed to get container status \"dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450\": rpc error: code = NotFound desc = could not find container \"dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450\": container with ID starting with dd268cbb5b25d7acaaa81666be30bc326396d61491fdc2db048fa0c3bc630450 not found: ID does not exist" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.373684 4878 scope.go:117] "RemoveContainer" containerID="4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a" Dec 04 16:03:30 crc kubenswrapper[4878]: E1204 16:03:30.374111 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a\": container with ID starting with 4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a not found: ID does not exist" containerID="4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.374150 4878 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a"} err="failed to get container status \"4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a\": rpc error: code = NotFound desc = could not find container \"4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a\": container with ID starting with 4706b2c046938615df74597e020db9c7a2f09665be8914bcdac06bb7f7d7c27a not found: ID does not exist" Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.840145 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:03:30 crc kubenswrapper[4878]: I1204 16:03:30.840230 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:03:31 crc kubenswrapper[4878]: I1204 16:03:31.191398 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5a628e7-3a75-48d9-999a-96beebc623b4" path="/var/lib/kubelet/pods/b5a628e7-3a75-48d9-999a-96beebc623b4/volumes" Dec 04 16:04:00 crc kubenswrapper[4878]: I1204 16:04:00.840832 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:04:00 crc kubenswrapper[4878]: I1204 16:04:00.841427 4878 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:04:00 crc kubenswrapper[4878]: I1204 16:04:00.841476 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 16:04:00 crc kubenswrapper[4878]: I1204 16:04:00.842353 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 16:04:00 crc kubenswrapper[4878]: I1204 16:04:00.842415 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" gracePeriod=600 Dec 04 16:04:00 crc kubenswrapper[4878]: E1204 16:04:00.979412 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:04:01 crc kubenswrapper[4878]: I1204 16:04:01.593582 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" 
containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" exitCode=0 Dec 04 16:04:01 crc kubenswrapper[4878]: I1204 16:04:01.593636 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46"} Dec 04 16:04:01 crc kubenswrapper[4878]: I1204 16:04:01.593702 4878 scope.go:117] "RemoveContainer" containerID="e70960a91382094bb97b7778803753c08510ffcdf745328cfe037d41064c7754" Dec 04 16:04:01 crc kubenswrapper[4878]: I1204 16:04:01.594558 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:04:01 crc kubenswrapper[4878]: E1204 16:04:01.594844 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:04:11 crc kubenswrapper[4878]: I1204 16:04:11.964847 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lfhg5"] Dec 04 16:04:11 crc kubenswrapper[4878]: E1204 16:04:11.965840 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerName="registry-server" Dec 04 16:04:11 crc kubenswrapper[4878]: I1204 16:04:11.965857 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerName="registry-server" Dec 04 16:04:11 crc kubenswrapper[4878]: E1204 16:04:11.965889 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerName="extract-utilities" Dec 04 16:04:11 crc kubenswrapper[4878]: I1204 16:04:11.965897 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerName="extract-utilities" Dec 04 16:04:11 crc kubenswrapper[4878]: E1204 16:04:11.965914 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerName="extract-content" Dec 04 16:04:11 crc kubenswrapper[4878]: I1204 16:04:11.965920 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerName="extract-content" Dec 04 16:04:11 crc kubenswrapper[4878]: I1204 16:04:11.966215 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a628e7-3a75-48d9-999a-96beebc623b4" containerName="registry-server" Dec 04 16:04:11 crc kubenswrapper[4878]: I1204 16:04:11.968311 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:11 crc kubenswrapper[4878]: I1204 16:04:11.981857 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lfhg5"] Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.016409 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-utilities\") pod \"community-operators-lfhg5\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.016590 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj8gr\" (UniqueName: \"kubernetes.io/projected/ea634861-f0f1-4e56-b633-3e5705971450-kube-api-access-mj8gr\") pod \"community-operators-lfhg5\" (UID: 
\"ea634861-f0f1-4e56-b633-3e5705971450\") " pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.016812 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-catalog-content\") pod \"community-operators-lfhg5\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.119737 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-utilities\") pod \"community-operators-lfhg5\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.119896 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj8gr\" (UniqueName: \"kubernetes.io/projected/ea634861-f0f1-4e56-b633-3e5705971450-kube-api-access-mj8gr\") pod \"community-operators-lfhg5\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.119993 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-catalog-content\") pod \"community-operators-lfhg5\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.158638 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-catalog-content\") pod \"community-operators-lfhg5\" (UID: 
\"ea634861-f0f1-4e56-b633-3e5705971450\") " pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.158638 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-utilities\") pod \"community-operators-lfhg5\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.164503 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj8gr\" (UniqueName: \"kubernetes.io/projected/ea634861-f0f1-4e56-b633-3e5705971450-kube-api-access-mj8gr\") pod \"community-operators-lfhg5\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.330756 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:12 crc kubenswrapper[4878]: I1204 16:04:12.868566 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lfhg5"] Dec 04 16:04:13 crc kubenswrapper[4878]: I1204 16:04:13.730515 4878 generic.go:334] "Generic (PLEG): container finished" podID="ea634861-f0f1-4e56-b633-3e5705971450" containerID="6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b" exitCode=0 Dec 04 16:04:13 crc kubenswrapper[4878]: I1204 16:04:13.730585 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfhg5" event={"ID":"ea634861-f0f1-4e56-b633-3e5705971450","Type":"ContainerDied","Data":"6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b"} Dec 04 16:04:13 crc kubenswrapper[4878]: I1204 16:04:13.731038 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfhg5" 
event={"ID":"ea634861-f0f1-4e56-b633-3e5705971450","Type":"ContainerStarted","Data":"605c71b555d16645b71fbd79e32d7bfbf67d880ac95696732094017c1b685a47"} Dec 04 16:04:14 crc kubenswrapper[4878]: I1204 16:04:14.771212 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfhg5" event={"ID":"ea634861-f0f1-4e56-b633-3e5705971450","Type":"ContainerStarted","Data":"f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a"} Dec 04 16:04:15 crc kubenswrapper[4878]: I1204 16:04:15.180193 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:04:15 crc kubenswrapper[4878]: E1204 16:04:15.180821 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:04:15 crc kubenswrapper[4878]: I1204 16:04:15.785464 4878 generic.go:334] "Generic (PLEG): container finished" podID="ea634861-f0f1-4e56-b633-3e5705971450" containerID="f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a" exitCode=0 Dec 04 16:04:15 crc kubenswrapper[4878]: I1204 16:04:15.785548 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfhg5" event={"ID":"ea634861-f0f1-4e56-b633-3e5705971450","Type":"ContainerDied","Data":"f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a"} Dec 04 16:04:16 crc kubenswrapper[4878]: I1204 16:04:16.798046 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfhg5" 
event={"ID":"ea634861-f0f1-4e56-b633-3e5705971450","Type":"ContainerStarted","Data":"c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd"} Dec 04 16:04:16 crc kubenswrapper[4878]: I1204 16:04:16.828137 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lfhg5" podStartSLOduration=3.224508551 podStartE2EDuration="5.828110808s" podCreationTimestamp="2025-12-04 16:04:11 +0000 UTC" firstStartedPulling="2025-12-04 16:04:13.732443425 +0000 UTC m=+1697.694980381" lastFinishedPulling="2025-12-04 16:04:16.336045682 +0000 UTC m=+1700.298582638" observedRunningTime="2025-12-04 16:04:16.81989324 +0000 UTC m=+1700.782430196" watchObservedRunningTime="2025-12-04 16:04:16.828110808 +0000 UTC m=+1700.790647764" Dec 04 16:04:22 crc kubenswrapper[4878]: I1204 16:04:22.332201 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:22 crc kubenswrapper[4878]: I1204 16:04:22.332715 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:22 crc kubenswrapper[4878]: I1204 16:04:22.384948 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:22 crc kubenswrapper[4878]: I1204 16:04:22.903212 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:22 crc kubenswrapper[4878]: I1204 16:04:22.960827 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lfhg5"] Dec 04 16:04:24 crc kubenswrapper[4878]: I1204 16:04:24.874424 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lfhg5" podUID="ea634861-f0f1-4e56-b633-3e5705971450" containerName="registry-server" 
containerID="cri-o://c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd" gracePeriod=2 Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.811767 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.841549 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-utilities\") pod \"ea634861-f0f1-4e56-b633-3e5705971450\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.841754 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj8gr\" (UniqueName: \"kubernetes.io/projected/ea634861-f0f1-4e56-b633-3e5705971450-kube-api-access-mj8gr\") pod \"ea634861-f0f1-4e56-b633-3e5705971450\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.841837 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-catalog-content\") pod \"ea634861-f0f1-4e56-b633-3e5705971450\" (UID: \"ea634861-f0f1-4e56-b633-3e5705971450\") " Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.844451 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-utilities" (OuterVolumeSpecName: "utilities") pod "ea634861-f0f1-4e56-b633-3e5705971450" (UID: "ea634861-f0f1-4e56-b633-3e5705971450"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.883209 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea634861-f0f1-4e56-b633-3e5705971450-kube-api-access-mj8gr" (OuterVolumeSpecName: "kube-api-access-mj8gr") pod "ea634861-f0f1-4e56-b633-3e5705971450" (UID: "ea634861-f0f1-4e56-b633-3e5705971450"). InnerVolumeSpecName "kube-api-access-mj8gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.895488 4878 generic.go:334] "Generic (PLEG): container finished" podID="ea634861-f0f1-4e56-b633-3e5705971450" containerID="c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd" exitCode=0 Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.895694 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lfhg5" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.896723 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfhg5" event={"ID":"ea634861-f0f1-4e56-b633-3e5705971450","Type":"ContainerDied","Data":"c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd"} Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.896757 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lfhg5" event={"ID":"ea634861-f0f1-4e56-b633-3e5705971450","Type":"ContainerDied","Data":"605c71b555d16645b71fbd79e32d7bfbf67d880ac95696732094017c1b685a47"} Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.896775 4878 scope.go:117] "RemoveContainer" containerID="c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.901953 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea634861-f0f1-4e56-b633-3e5705971450" (UID: "ea634861-f0f1-4e56-b633-3e5705971450"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.920629 4878 scope.go:117] "RemoveContainer" containerID="f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.944682 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj8gr\" (UniqueName: \"kubernetes.io/projected/ea634861-f0f1-4e56-b633-3e5705971450-kube-api-access-mj8gr\") on node \"crc\" DevicePath \"\"" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.944726 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.944737 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea634861-f0f1-4e56-b633-3e5705971450-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.947148 4878 scope.go:117] "RemoveContainer" containerID="6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.989082 4878 scope.go:117] "RemoveContainer" containerID="c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd" Dec 04 16:04:25 crc kubenswrapper[4878]: E1204 16:04:25.989620 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd\": container with ID starting with 
c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd not found: ID does not exist" containerID="c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.989686 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd"} err="failed to get container status \"c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd\": rpc error: code = NotFound desc = could not find container \"c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd\": container with ID starting with c54f123d0dee6a208dc060f9affe1d1f39c724e6514023b763d1ba7c7fd13cfd not found: ID does not exist" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.989717 4878 scope.go:117] "RemoveContainer" containerID="f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a" Dec 04 16:04:25 crc kubenswrapper[4878]: E1204 16:04:25.990285 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a\": container with ID starting with f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a not found: ID does not exist" containerID="f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.990321 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a"} err="failed to get container status \"f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a\": rpc error: code = NotFound desc = could not find container \"f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a\": container with ID starting with f88cca58c8e890c9ee2a53bcea7b8ec71bf1b0a77cca0f7ac3e9af6bee29204a not found: ID does not 
exist" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.990349 4878 scope.go:117] "RemoveContainer" containerID="6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b" Dec 04 16:04:25 crc kubenswrapper[4878]: E1204 16:04:25.990806 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b\": container with ID starting with 6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b not found: ID does not exist" containerID="6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b" Dec 04 16:04:25 crc kubenswrapper[4878]: I1204 16:04:25.990855 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b"} err="failed to get container status \"6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b\": rpc error: code = NotFound desc = could not find container \"6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b\": container with ID starting with 6ef8cc25b1de5af1996b56b0bc5f9ec141ce6ed962c21bb0421ed11b3f93b99b not found: ID does not exist" Dec 04 16:04:26 crc kubenswrapper[4878]: I1204 16:04:26.180688 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:04:26 crc kubenswrapper[4878]: E1204 16:04:26.181175 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:04:26 crc kubenswrapper[4878]: I1204 16:04:26.233152 4878 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lfhg5"] Dec 04 16:04:26 crc kubenswrapper[4878]: I1204 16:04:26.243417 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lfhg5"] Dec 04 16:04:27 crc kubenswrapper[4878]: I1204 16:04:27.193118 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea634861-f0f1-4e56-b633-3e5705971450" path="/var/lib/kubelet/pods/ea634861-f0f1-4e56-b633-3e5705971450/volumes" Dec 04 16:04:40 crc kubenswrapper[4878]: I1204 16:04:40.180422 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:04:40 crc kubenswrapper[4878]: E1204 16:04:40.181223 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:04:53 crc kubenswrapper[4878]: I1204 16:04:53.180683 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:04:53 crc kubenswrapper[4878]: E1204 16:04:53.182084 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.060167 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/keystone-8ba5-account-create-update-8k8xj"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.077230 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-7c8e-account-create-update-s7lxm"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.095643 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-cpfz9"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.104887 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-90b3-account-create-update-hm8lq"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.114062 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-mrm88"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.125606 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-4gmz6"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.135524 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8ba5-account-create-update-8k8xj"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.145919 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-mrm88"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.154691 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-cpfz9"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.163701 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-7c8e-account-create-update-s7lxm"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.172434 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-90b3-account-create-update-hm8lq"] Dec 04 16:05:00 crc kubenswrapper[4878]: I1204 16:05:00.181589 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-4gmz6"] Dec 04 16:05:01 crc kubenswrapper[4878]: I1204 16:05:01.196574 4878 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="2737dc43-bf57-49c6-ab53-06ba48bfc80a" path="/var/lib/kubelet/pods/2737dc43-bf57-49c6-ab53-06ba48bfc80a/volumes" Dec 04 16:05:01 crc kubenswrapper[4878]: I1204 16:05:01.198251 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2aa82d2a-1f5f-4689-bb38-fc00144e2174" path="/var/lib/kubelet/pods/2aa82d2a-1f5f-4689-bb38-fc00144e2174/volumes" Dec 04 16:05:01 crc kubenswrapper[4878]: I1204 16:05:01.199034 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44154d84-cb0b-4894-9b50-4a93fafc5136" path="/var/lib/kubelet/pods/44154d84-cb0b-4894-9b50-4a93fafc5136/volumes" Dec 04 16:05:01 crc kubenswrapper[4878]: I1204 16:05:01.199776 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55939a1d-54f3-4c84-a201-dc129636438b" path="/var/lib/kubelet/pods/55939a1d-54f3-4c84-a201-dc129636438b/volumes" Dec 04 16:05:01 crc kubenswrapper[4878]: I1204 16:05:01.201889 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dda6827-59a8-4cdf-9446-555d17a5793a" path="/var/lib/kubelet/pods/7dda6827-59a8-4cdf-9446-555d17a5793a/volumes" Dec 04 16:05:01 crc kubenswrapper[4878]: I1204 16:05:01.202634 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa6917aa-17e3-4bff-b2e9-c5101344b039" path="/var/lib/kubelet/pods/fa6917aa-17e3-4bff-b2e9-c5101344b039/volumes" Dec 04 16:05:04 crc kubenswrapper[4878]: I1204 16:05:04.180014 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:05:04 crc kubenswrapper[4878]: E1204 16:05:04.180497 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:05:04 crc kubenswrapper[4878]: I1204 16:05:04.381633 4878 scope.go:117] "RemoveContainer" containerID="bc31cf9cbe560c148b9d233c18d41b8afe51829ddc31bba4725f47e2778c6455" Dec 04 16:05:04 crc kubenswrapper[4878]: I1204 16:05:04.410164 4878 scope.go:117] "RemoveContainer" containerID="8f039a9ac2f7c154f5e6bf61bdd9cd061eb8847b21939c5279350a0937f3db44" Dec 04 16:05:04 crc kubenswrapper[4878]: I1204 16:05:04.458145 4878 scope.go:117] "RemoveContainer" containerID="ef1c70716a55a667d49a91cfd53cd687e55221ee9c50714540258a8479327c28" Dec 04 16:05:04 crc kubenswrapper[4878]: I1204 16:05:04.510372 4878 scope.go:117] "RemoveContainer" containerID="1b7794a84c84c939e2d37b3a52286d63f8ba17f7b2b9ca711e4e055240d86b93" Dec 04 16:05:04 crc kubenswrapper[4878]: I1204 16:05:04.552288 4878 scope.go:117] "RemoveContainer" containerID="d2d2dd892ba97197e161dc0633a5ec3d2e7f17c30965eacdc60641d615fdcb93" Dec 04 16:05:04 crc kubenswrapper[4878]: I1204 16:05:04.598742 4878 scope.go:117] "RemoveContainer" containerID="4b482636a5d728ff14eb52c5711a9c4a87266c3d0ba8e23ea72c7c2f8e30b293" Dec 04 16:05:13 crc kubenswrapper[4878]: I1204 16:05:13.417011 4878 generic.go:334] "Generic (PLEG): container finished" podID="844663ab-0b83-4d6a-9493-b8ce0743f963" containerID="9dfb9ea0306fa0c7240244825ab4b08c13e81adf12042e33418d4bc1e7b5a391" exitCode=0 Dec 04 16:05:13 crc kubenswrapper[4878]: I1204 16:05:13.417106 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" event={"ID":"844663ab-0b83-4d6a-9493-b8ce0743f963","Type":"ContainerDied","Data":"9dfb9ea0306fa0c7240244825ab4b08c13e81adf12042e33418d4bc1e7b5a391"} Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.838653 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.877248 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-ssh-key\") pod \"844663ab-0b83-4d6a-9493-b8ce0743f963\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.877743 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkmpq\" (UniqueName: \"kubernetes.io/projected/844663ab-0b83-4d6a-9493-b8ce0743f963-kube-api-access-pkmpq\") pod \"844663ab-0b83-4d6a-9493-b8ce0743f963\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.877859 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-bootstrap-combined-ca-bundle\") pod \"844663ab-0b83-4d6a-9493-b8ce0743f963\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.877915 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-inventory\") pod \"844663ab-0b83-4d6a-9493-b8ce0743f963\" (UID: \"844663ab-0b83-4d6a-9493-b8ce0743f963\") " Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.886452 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "844663ab-0b83-4d6a-9493-b8ce0743f963" (UID: "844663ab-0b83-4d6a-9493-b8ce0743f963"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.886502 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/844663ab-0b83-4d6a-9493-b8ce0743f963-kube-api-access-pkmpq" (OuterVolumeSpecName: "kube-api-access-pkmpq") pod "844663ab-0b83-4d6a-9493-b8ce0743f963" (UID: "844663ab-0b83-4d6a-9493-b8ce0743f963"). InnerVolumeSpecName "kube-api-access-pkmpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.910377 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "844663ab-0b83-4d6a-9493-b8ce0743f963" (UID: "844663ab-0b83-4d6a-9493-b8ce0743f963"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.912742 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-inventory" (OuterVolumeSpecName: "inventory") pod "844663ab-0b83-4d6a-9493-b8ce0743f963" (UID: "844663ab-0b83-4d6a-9493-b8ce0743f963"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.981076 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.981331 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkmpq\" (UniqueName: \"kubernetes.io/projected/844663ab-0b83-4d6a-9493-b8ce0743f963-kube-api-access-pkmpq\") on node \"crc\" DevicePath \"\"" Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.981455 4878 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:05:14 crc kubenswrapper[4878]: I1204 16:05:14.981560 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/844663ab-0b83-4d6a-9493-b8ce0743f963-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.446806 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" event={"ID":"844663ab-0b83-4d6a-9493-b8ce0743f963","Type":"ContainerDied","Data":"0549b115ed5d84b0da28510e629200f9b1c807bdbe965ee53ad46d36b62cc845"} Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.446883 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0549b115ed5d84b0da28510e629200f9b1c807bdbe965ee53ad46d36b62cc845" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.446891 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.542614 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl"] Dec 04 16:05:15 crc kubenswrapper[4878]: E1204 16:05:15.543215 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea634861-f0f1-4e56-b633-3e5705971450" containerName="extract-content" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.543240 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea634861-f0f1-4e56-b633-3e5705971450" containerName="extract-content" Dec 04 16:05:15 crc kubenswrapper[4878]: E1204 16:05:15.543254 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="844663ab-0b83-4d6a-9493-b8ce0743f963" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.543264 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="844663ab-0b83-4d6a-9493-b8ce0743f963" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 04 16:05:15 crc kubenswrapper[4878]: E1204 16:05:15.543296 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea634861-f0f1-4e56-b633-3e5705971450" containerName="registry-server" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.543304 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea634861-f0f1-4e56-b633-3e5705971450" containerName="registry-server" Dec 04 16:05:15 crc kubenswrapper[4878]: E1204 16:05:15.543316 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea634861-f0f1-4e56-b633-3e5705971450" containerName="extract-utilities" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.543324 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea634861-f0f1-4e56-b633-3e5705971450" containerName="extract-utilities" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.543536 
4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="844663ab-0b83-4d6a-9493-b8ce0743f963" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.543573 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea634861-f0f1-4e56-b633-3e5705971450" containerName="registry-server" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.544503 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.547609 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.547744 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.549161 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.551149 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.558126 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl"] Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.694192 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-754hh\" (UniqueName: \"kubernetes.io/projected/56812292-222d-4323-86ad-30023b9862b0-kube-api-access-754hh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc 
kubenswrapper[4878]: I1204 16:05:15.694238 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.694296 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.796921 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-754hh\" (UniqueName: \"kubernetes.io/projected/56812292-222d-4323-86ad-30023b9862b0-kube-api-access-754hh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.796986 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.797087 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.802767 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.806181 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.821652 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-754hh\" (UniqueName: \"kubernetes.io/projected/56812292-222d-4323-86ad-30023b9862b0-kube-api-access-754hh\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:15 crc kubenswrapper[4878]: I1204 16:05:15.864642 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:05:16 crc kubenswrapper[4878]: I1204 16:05:16.180088 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:05:16 crc kubenswrapper[4878]: E1204 16:05:16.180704 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:05:16 crc kubenswrapper[4878]: W1204 16:05:16.413088 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56812292_222d_4323_86ad_30023b9862b0.slice/crio-9535178bd9a526e9856930788e2403abe95a0fbc27383fced600aed58ef4f1de WatchSource:0}: Error finding container 9535178bd9a526e9856930788e2403abe95a0fbc27383fced600aed58ef4f1de: Status 404 returned error can't find the container with id 9535178bd9a526e9856930788e2403abe95a0fbc27383fced600aed58ef4f1de Dec 04 16:05:16 crc kubenswrapper[4878]: I1204 16:05:16.420984 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl"] Dec 04 16:05:16 crc kubenswrapper[4878]: I1204 16:05:16.459007 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" event={"ID":"56812292-222d-4323-86ad-30023b9862b0","Type":"ContainerStarted","Data":"9535178bd9a526e9856930788e2403abe95a0fbc27383fced600aed58ef4f1de"} Dec 04 16:05:17 crc kubenswrapper[4878]: I1204 16:05:17.482310 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" event={"ID":"56812292-222d-4323-86ad-30023b9862b0","Type":"ContainerStarted","Data":"23a76ecf87d3073db0c1d8dab81cf8e452491c5d4880835fde10e351f48d4901"} Dec 04 16:05:17 crc kubenswrapper[4878]: I1204 16:05:17.507468 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" podStartSLOduration=2.033679864 podStartE2EDuration="2.507443708s" podCreationTimestamp="2025-12-04 16:05:15 +0000 UTC" firstStartedPulling="2025-12-04 16:05:16.415519949 +0000 UTC m=+1760.378056905" lastFinishedPulling="2025-12-04 16:05:16.889283793 +0000 UTC m=+1760.851820749" observedRunningTime="2025-12-04 16:05:17.502678938 +0000 UTC m=+1761.465215914" watchObservedRunningTime="2025-12-04 16:05:17.507443708 +0000 UTC m=+1761.469980664" Dec 04 16:05:31 crc kubenswrapper[4878]: I1204 16:05:31.180008 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:05:31 crc kubenswrapper[4878]: E1204 16:05:31.181261 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:05:46 crc kubenswrapper[4878]: I1204 16:05:46.179827 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:05:46 crc kubenswrapper[4878]: E1204 16:05:46.180595 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:05:47 crc kubenswrapper[4878]: I1204 16:05:47.049354 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-vlv79"] Dec 04 16:05:47 crc kubenswrapper[4878]: I1204 16:05:47.058062 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-vlv79"] Dec 04 16:05:47 crc kubenswrapper[4878]: I1204 16:05:47.192776 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15b32fab-0a73-417d-af80-9b289421b529" path="/var/lib/kubelet/pods/15b32fab-0a73-417d-af80-9b289421b529/volumes" Dec 04 16:05:49 crc kubenswrapper[4878]: I1204 16:05:49.034078 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rcch2"] Dec 04 16:05:49 crc kubenswrapper[4878]: I1204 16:05:49.044458 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5b99-account-create-update-tp2qw"] Dec 04 16:05:49 crc kubenswrapper[4878]: I1204 16:05:49.053838 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-g7jwn"] Dec 04 16:05:49 crc kubenswrapper[4878]: I1204 16:05:49.063059 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-g7jwn"] Dec 04 16:05:49 crc kubenswrapper[4878]: I1204 16:05:49.071278 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5b99-account-create-update-tp2qw"] Dec 04 16:05:49 crc kubenswrapper[4878]: I1204 16:05:49.078820 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rcch2"] Dec 04 16:05:49 crc kubenswrapper[4878]: I1204 16:05:49.190543 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e84f13-7faf-4acb-bee8-57e817842089" 
path="/var/lib/kubelet/pods/c1e84f13-7faf-4acb-bee8-57e817842089/volumes" Dec 04 16:05:49 crc kubenswrapper[4878]: I1204 16:05:49.191200 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d96ded51-89ea-4af0-916f-5f63afd77cfa" path="/var/lib/kubelet/pods/d96ded51-89ea-4af0-916f-5f63afd77cfa/volumes" Dec 04 16:05:49 crc kubenswrapper[4878]: I1204 16:05:49.191836 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f80534b0-7801-4e06-a5a4-54c6cc79fe4c" path="/var/lib/kubelet/pods/f80534b0-7801-4e06-a5a4-54c6cc79fe4c/volumes" Dec 04 16:05:50 crc kubenswrapper[4878]: I1204 16:05:50.035754 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-684a-account-create-update-pwjmt"] Dec 04 16:05:50 crc kubenswrapper[4878]: I1204 16:05:50.047106 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mn59d"] Dec 04 16:05:50 crc kubenswrapper[4878]: I1204 16:05:50.060849 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5f07-account-create-update-cbntx"] Dec 04 16:05:50 crc kubenswrapper[4878]: I1204 16:05:50.073199 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-684a-account-create-update-pwjmt"] Dec 04 16:05:50 crc kubenswrapper[4878]: I1204 16:05:50.083965 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mn59d"] Dec 04 16:05:50 crc kubenswrapper[4878]: I1204 16:05:50.095323 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5f07-account-create-update-cbntx"] Dec 04 16:05:51 crc kubenswrapper[4878]: I1204 16:05:51.191244 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cfc7e4-1de9-400b-8c2c-c225aabbae69" path="/var/lib/kubelet/pods/08cfc7e4-1de9-400b-8c2c-c225aabbae69/volumes" Dec 04 16:05:51 crc kubenswrapper[4878]: I1204 16:05:51.191951 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7ef8e917-5db2-471b-b047-6d61d46162bc" path="/var/lib/kubelet/pods/7ef8e917-5db2-471b-b047-6d61d46162bc/volumes" Dec 04 16:05:51 crc kubenswrapper[4878]: I1204 16:05:51.192528 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc0dafad-a741-434b-9b7d-72a301c16d46" path="/var/lib/kubelet/pods/dc0dafad-a741-434b-9b7d-72a301c16d46/volumes" Dec 04 16:05:59 crc kubenswrapper[4878]: I1204 16:05:59.045363 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ksg95"] Dec 04 16:05:59 crc kubenswrapper[4878]: I1204 16:05:59.055007 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ksg95"] Dec 04 16:05:59 crc kubenswrapper[4878]: I1204 16:05:59.191540 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d64f9f6-f276-43d2-b298-95d7a51d7247" path="/var/lib/kubelet/pods/9d64f9f6-f276-43d2-b298-95d7a51d7247/volumes" Dec 04 16:06:00 crc kubenswrapper[4878]: I1204 16:06:00.180376 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:06:00 crc kubenswrapper[4878]: E1204 16:06:00.181221 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:06:04 crc kubenswrapper[4878]: I1204 16:06:04.775244 4878 scope.go:117] "RemoveContainer" containerID="5da97d3802ab0720ac319f116112f8bae51550befc2c6698f9837a3fca7eb9ce" Dec 04 16:06:04 crc kubenswrapper[4878]: I1204 16:06:04.803695 4878 scope.go:117] "RemoveContainer" containerID="d24af50360106a23912099fe8aecf0ca60393daf7d7931be40be9c3e9bae5223" Dec 04 16:06:04 crc 
kubenswrapper[4878]: I1204 16:06:04.855896 4878 scope.go:117] "RemoveContainer" containerID="0d64ae47fdde18622d3f0442b21f13a323375fb0ab95eb0bbfed3344c4fb177f" Dec 04 16:06:04 crc kubenswrapper[4878]: I1204 16:06:04.916209 4878 scope.go:117] "RemoveContainer" containerID="69b4f9b0a939474106302584e690ee1ee8affe015477338afaf80e128960078f" Dec 04 16:06:04 crc kubenswrapper[4878]: I1204 16:06:04.949636 4878 scope.go:117] "RemoveContainer" containerID="1ec0c51e8a15c821a870901d8a423d3e757cf95f5ed282c484c8093268a98863" Dec 04 16:06:05 crc kubenswrapper[4878]: I1204 16:06:05.028451 4878 scope.go:117] "RemoveContainer" containerID="1cc03292fc04da1e82c691fd5819818c607331bf4d7edd89b5d2f4c90863622e" Dec 04 16:06:05 crc kubenswrapper[4878]: I1204 16:06:05.055363 4878 scope.go:117] "RemoveContainer" containerID="93b0c37db41560dc9e1f3aabe07f56c095621d8dcc83f3828920f0a16eb17a5b" Dec 04 16:06:05 crc kubenswrapper[4878]: I1204 16:06:05.076089 4878 scope.go:117] "RemoveContainer" containerID="0e1ec4269a66e30a56708646cfd6218ab77564995f125aae2e0943b0af60fc0f" Dec 04 16:06:05 crc kubenswrapper[4878]: I1204 16:06:05.096063 4878 scope.go:117] "RemoveContainer" containerID="d29a6a166de080e8e0ab9a10be01628c48794d45a404f3a043dd3c5d929066bf" Dec 04 16:06:05 crc kubenswrapper[4878]: I1204 16:06:05.129738 4878 scope.go:117] "RemoveContainer" containerID="315fafdd44eef789649674294b3c53407252fb909e522008db059c2c0d3bb0c2" Dec 04 16:06:13 crc kubenswrapper[4878]: I1204 16:06:13.180483 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:06:13 crc kubenswrapper[4878]: E1204 16:06:13.183668 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:06:27 crc kubenswrapper[4878]: I1204 16:06:27.188104 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:06:27 crc kubenswrapper[4878]: E1204 16:06:27.188828 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:06:34 crc kubenswrapper[4878]: I1204 16:06:34.046554 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-4pqmg"] Dec 04 16:06:34 crc kubenswrapper[4878]: I1204 16:06:34.056783 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-4pqmg"] Dec 04 16:06:35 crc kubenswrapper[4878]: I1204 16:06:35.191987 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67596249-6134-4ecd-8c9f-865a51c1cbfa" path="/var/lib/kubelet/pods/67596249-6134-4ecd-8c9f-865a51c1cbfa/volumes" Dec 04 16:06:38 crc kubenswrapper[4878]: I1204 16:06:38.179976 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:06:38 crc kubenswrapper[4878]: E1204 16:06:38.180693 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" 
podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:06:48 crc kubenswrapper[4878]: I1204 16:06:48.041064 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-rgck8"] Dec 04 16:06:48 crc kubenswrapper[4878]: I1204 16:06:48.055484 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-rgck8"] Dec 04 16:06:49 crc kubenswrapper[4878]: I1204 16:06:49.192391 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b6c7cc6-40e3-44ff-bd1c-6741af643002" path="/var/lib/kubelet/pods/4b6c7cc6-40e3-44ff-bd1c-6741af643002/volumes" Dec 04 16:06:52 crc kubenswrapper[4878]: I1204 16:06:52.180454 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:06:52 crc kubenswrapper[4878]: E1204 16:06:52.181308 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:06:58 crc kubenswrapper[4878]: I1204 16:06:58.045527 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-vm2hn"] Dec 04 16:06:58 crc kubenswrapper[4878]: I1204 16:06:58.056383 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-vm2hn"] Dec 04 16:06:59 crc kubenswrapper[4878]: I1204 16:06:59.193218 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7a20413-55ed-48d6-98c3-0bd98368deaa" path="/var/lib/kubelet/pods/d7a20413-55ed-48d6-98c3-0bd98368deaa/volumes" Dec 04 16:07:00 crc kubenswrapper[4878]: I1204 16:07:00.044839 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-db-sync-sljcs"] Dec 04 16:07:00 crc kubenswrapper[4878]: I1204 16:07:00.057282 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-874tf"] Dec 04 16:07:00 crc kubenswrapper[4878]: I1204 16:07:00.067506 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-sljcs"] Dec 04 16:07:00 crc kubenswrapper[4878]: I1204 16:07:00.076528 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-874tf"] Dec 04 16:07:01 crc kubenswrapper[4878]: I1204 16:07:01.191422 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e69f1bb-0019-4fee-b04b-d4e6319c61db" path="/var/lib/kubelet/pods/9e69f1bb-0019-4fee-b04b-d4e6319c61db/volumes" Dec 04 16:07:01 crc kubenswrapper[4878]: I1204 16:07:01.192267 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b4a412-5105-473d-8037-1b43c331046b" path="/var/lib/kubelet/pods/b7b4a412-5105-473d-8037-1b43c331046b/volumes" Dec 04 16:07:05 crc kubenswrapper[4878]: I1204 16:07:05.374291 4878 scope.go:117] "RemoveContainer" containerID="b8fe6232990f712f6fc840b67824c706e9c9c437243418f8234d000631b44808" Dec 04 16:07:05 crc kubenswrapper[4878]: I1204 16:07:05.416783 4878 scope.go:117] "RemoveContainer" containerID="baec7959ee2d1c9532b754e896f320bdb3feb7e0859f858faea0917807e28192" Dec 04 16:07:05 crc kubenswrapper[4878]: I1204 16:07:05.493392 4878 scope.go:117] "RemoveContainer" containerID="1e35fa8a6f48b3ccfe8d9eb2116c66b67e963bcce7b0620b9fbb3188ca2b408b" Dec 04 16:07:05 crc kubenswrapper[4878]: I1204 16:07:05.521645 4878 scope.go:117] "RemoveContainer" containerID="661ef92c5c15b4dd2c10eeb20887ba2b4131d6fb0233af1502706010b6e1e512" Dec 04 16:07:05 crc kubenswrapper[4878]: I1204 16:07:05.579938 4878 scope.go:117] "RemoveContainer" containerID="d876f9782754cc3642fdb93d37a1d61635e48f65a76f6de526988b8bdbfabea7" Dec 04 16:07:07 crc kubenswrapper[4878]: I1204 16:07:07.187115 4878 scope.go:117] 
"RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:07:07 crc kubenswrapper[4878]: E1204 16:07:07.187728 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:07:14 crc kubenswrapper[4878]: I1204 16:07:14.691631 4878 generic.go:334] "Generic (PLEG): container finished" podID="56812292-222d-4323-86ad-30023b9862b0" containerID="23a76ecf87d3073db0c1d8dab81cf8e452491c5d4880835fde10e351f48d4901" exitCode=0 Dec 04 16:07:14 crc kubenswrapper[4878]: I1204 16:07:14.691742 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" event={"ID":"56812292-222d-4323-86ad-30023b9862b0","Type":"ContainerDied","Data":"23a76ecf87d3073db0c1d8dab81cf8e452491c5d4880835fde10e351f48d4901"} Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.141606 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.312792 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-ssh-key\") pod \"56812292-222d-4323-86ad-30023b9862b0\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.313066 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-inventory\") pod \"56812292-222d-4323-86ad-30023b9862b0\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.313168 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-754hh\" (UniqueName: \"kubernetes.io/projected/56812292-222d-4323-86ad-30023b9862b0-kube-api-access-754hh\") pod \"56812292-222d-4323-86ad-30023b9862b0\" (UID: \"56812292-222d-4323-86ad-30023b9862b0\") " Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.319229 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56812292-222d-4323-86ad-30023b9862b0-kube-api-access-754hh" (OuterVolumeSpecName: "kube-api-access-754hh") pod "56812292-222d-4323-86ad-30023b9862b0" (UID: "56812292-222d-4323-86ad-30023b9862b0"). InnerVolumeSpecName "kube-api-access-754hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.342765 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-inventory" (OuterVolumeSpecName: "inventory") pod "56812292-222d-4323-86ad-30023b9862b0" (UID: "56812292-222d-4323-86ad-30023b9862b0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.344195 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "56812292-222d-4323-86ad-30023b9862b0" (UID: "56812292-222d-4323-86ad-30023b9862b0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.415614 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-754hh\" (UniqueName: \"kubernetes.io/projected/56812292-222d-4323-86ad-30023b9862b0-kube-api-access-754hh\") on node \"crc\" DevicePath \"\"" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.415656 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.415667 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/56812292-222d-4323-86ad-30023b9862b0-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.711969 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" event={"ID":"56812292-222d-4323-86ad-30023b9862b0","Type":"ContainerDied","Data":"9535178bd9a526e9856930788e2403abe95a0fbc27383fced600aed58ef4f1de"} Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.712011 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9535178bd9a526e9856930788e2403abe95a0fbc27383fced600aed58ef4f1de" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.712018 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.797861 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj"] Dec 04 16:07:16 crc kubenswrapper[4878]: E1204 16:07:16.798437 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56812292-222d-4323-86ad-30023b9862b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.798464 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="56812292-222d-4323-86ad-30023b9862b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.798719 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="56812292-222d-4323-86ad-30023b9862b0" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.799620 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.804352 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.804559 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.804647 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.804763 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.808327 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj"] Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.926126 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.926224 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:16 crc kubenswrapper[4878]: I1204 16:07:16.926291 4878 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgz2c\" (UniqueName: \"kubernetes.io/projected/992af669-26c3-4266-bf3d-023460cf30b3-kube-api-access-vgz2c\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:17 crc kubenswrapper[4878]: I1204 16:07:17.028703 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:17 crc kubenswrapper[4878]: I1204 16:07:17.028768 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:17 crc kubenswrapper[4878]: I1204 16:07:17.028822 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgz2c\" (UniqueName: \"kubernetes.io/projected/992af669-26c3-4266-bf3d-023460cf30b3-kube-api-access-vgz2c\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:17 crc kubenswrapper[4878]: I1204 16:07:17.032674 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:17 crc kubenswrapper[4878]: I1204 16:07:17.032940 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:17 crc kubenswrapper[4878]: I1204 16:07:17.054813 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgz2c\" (UniqueName: \"kubernetes.io/projected/992af669-26c3-4266-bf3d-023460cf30b3-kube-api-access-vgz2c\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:17 crc kubenswrapper[4878]: I1204 16:07:17.118138 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:07:17 crc kubenswrapper[4878]: I1204 16:07:17.643752 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj"] Dec 04 16:07:17 crc kubenswrapper[4878]: I1204 16:07:17.723338 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" event={"ID":"992af669-26c3-4266-bf3d-023460cf30b3","Type":"ContainerStarted","Data":"f4ef63b398a3117746fe21fbe3002b1935862806b849796afc0bcea076f39d85"} Dec 04 16:07:18 crc kubenswrapper[4878]: I1204 16:07:18.733318 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" event={"ID":"992af669-26c3-4266-bf3d-023460cf30b3","Type":"ContainerStarted","Data":"2c70e98f1ae5bc581ef73e21c25f4d2039aefbbfdaba5ac23dd48729a553811a"} Dec 04 16:07:18 crc kubenswrapper[4878]: I1204 16:07:18.753487 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" podStartSLOduration=2.297788617 podStartE2EDuration="2.753465461s" podCreationTimestamp="2025-12-04 16:07:16 +0000 UTC" firstStartedPulling="2025-12-04 16:07:17.661808028 +0000 UTC m=+1881.624344984" lastFinishedPulling="2025-12-04 16:07:18.117484872 +0000 UTC m=+1882.080021828" observedRunningTime="2025-12-04 16:07:18.750711202 +0000 UTC m=+1882.713248158" watchObservedRunningTime="2025-12-04 16:07:18.753465461 +0000 UTC m=+1882.716002417" Dec 04 16:07:20 crc kubenswrapper[4878]: I1204 16:07:20.181111 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:07:20 crc kubenswrapper[4878]: E1204 16:07:20.181533 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:07:35 crc kubenswrapper[4878]: I1204 16:07:35.180479 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:07:35 crc kubenswrapper[4878]: E1204 16:07:35.181352 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:07:43 crc kubenswrapper[4878]: I1204 16:07:43.053751 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-hxdhg"] Dec 04 16:07:43 crc kubenswrapper[4878]: I1204 16:07:43.067188 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-09fc-account-create-update-sf4xx"] Dec 04 16:07:43 crc kubenswrapper[4878]: I1204 16:07:43.078903 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-09fc-account-create-update-sf4xx"] Dec 04 16:07:43 crc kubenswrapper[4878]: I1204 16:07:43.089597 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-hxdhg"] Dec 04 16:07:43 crc kubenswrapper[4878]: I1204 16:07:43.097250 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-nb89k"] Dec 04 16:07:43 crc kubenswrapper[4878]: I1204 16:07:43.105013 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-db-create-nb89k"] Dec 04 16:07:43 crc kubenswrapper[4878]: I1204 16:07:43.191426 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c679082-d66c-4280-bfab-15d1b6634db9" path="/var/lib/kubelet/pods/7c679082-d66c-4280-bfab-15d1b6634db9/volumes" Dec 04 16:07:43 crc kubenswrapper[4878]: I1204 16:07:43.192217 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bba4913b-3e30-4b9c-a404-50217b5f1657" path="/var/lib/kubelet/pods/bba4913b-3e30-4b9c-a404-50217b5f1657/volumes" Dec 04 16:07:43 crc kubenswrapper[4878]: I1204 16:07:43.192922 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0ab7e57-08d0-4697-bbf7-3abe045473b0" path="/var/lib/kubelet/pods/f0ab7e57-08d0-4697-bbf7-3abe045473b0/volumes" Dec 04 16:07:44 crc kubenswrapper[4878]: I1204 16:07:44.113956 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ac85-account-create-update-8pm6w"] Dec 04 16:07:44 crc kubenswrapper[4878]: I1204 16:07:44.125515 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-bea9-account-create-update-8rhwx"] Dec 04 16:07:44 crc kubenswrapper[4878]: I1204 16:07:44.133674 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ac85-account-create-update-8pm6w"] Dec 04 16:07:44 crc kubenswrapper[4878]: I1204 16:07:44.141306 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-bea9-account-create-update-8rhwx"] Dec 04 16:07:44 crc kubenswrapper[4878]: I1204 16:07:44.149357 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-47pmx"] Dec 04 16:07:44 crc kubenswrapper[4878]: I1204 16:07:44.162909 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-47pmx"] Dec 04 16:07:45 crc kubenswrapper[4878]: I1204 16:07:45.194730 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="78be2f68-3d21-4345-8544-3809d5dab436" path="/var/lib/kubelet/pods/78be2f68-3d21-4345-8544-3809d5dab436/volumes" Dec 04 16:07:45 crc kubenswrapper[4878]: I1204 16:07:45.195472 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba562765-65c2-4259-9373-38288bb120e3" path="/var/lib/kubelet/pods/ba562765-65c2-4259-9373-38288bb120e3/volumes" Dec 04 16:07:45 crc kubenswrapper[4878]: I1204 16:07:45.196147 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e426e3f3-3d26-4b49-873b-3a442b7de183" path="/var/lib/kubelet/pods/e426e3f3-3d26-4b49-873b-3a442b7de183/volumes" Dec 04 16:07:46 crc kubenswrapper[4878]: I1204 16:07:46.181438 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:07:46 crc kubenswrapper[4878]: E1204 16:07:46.182095 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:08:01 crc kubenswrapper[4878]: I1204 16:08:01.179563 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:08:01 crc kubenswrapper[4878]: E1204 16:08:01.180437 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:08:05 crc 
kubenswrapper[4878]: I1204 16:08:05.738850 4878 scope.go:117] "RemoveContainer" containerID="ab6d7187cbe96a4328879803fca06603addbe69aad047dfee5f1f63887dbc884" Dec 04 16:08:05 crc kubenswrapper[4878]: I1204 16:08:05.762501 4878 scope.go:117] "RemoveContainer" containerID="f10916f4fb1f8cfe21025398b248f1f35619a99235e5ade4c17d1381afe7390e" Dec 04 16:08:05 crc kubenswrapper[4878]: I1204 16:08:05.820199 4878 scope.go:117] "RemoveContainer" containerID="b8dfa64f57f80f7a21636d9ec8ef242095f550143bfd9e914b0b7e404d210130" Dec 04 16:08:05 crc kubenswrapper[4878]: I1204 16:08:05.882641 4878 scope.go:117] "RemoveContainer" containerID="54c2fc6cdefa5750264a0c0dcf964bbbee7b206d89c0af2020725d8e19659586" Dec 04 16:08:05 crc kubenswrapper[4878]: I1204 16:08:05.937722 4878 scope.go:117] "RemoveContainer" containerID="59993031e15363a5e8ff86880ba855bbb07481f07b4691b8444ace0c8e930fcf" Dec 04 16:08:05 crc kubenswrapper[4878]: I1204 16:08:05.977293 4878 scope.go:117] "RemoveContainer" containerID="84d02a87cb621416dda946cd5ec4fe76854b9e44ee5237dc3d51f348945c9f7f" Dec 04 16:08:13 crc kubenswrapper[4878]: I1204 16:08:13.180260 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:08:13 crc kubenswrapper[4878]: E1204 16:08:13.180917 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:08:19 crc kubenswrapper[4878]: I1204 16:08:19.041397 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-lsw79"] Dec 04 16:08:19 crc kubenswrapper[4878]: I1204 16:08:19.050735 4878 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell0-conductor-db-sync-lsw79"] Dec 04 16:08:19 crc kubenswrapper[4878]: I1204 16:08:19.190917 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d91c912-ec19-4cf2-ade1-d8c9a9df95b5" path="/var/lib/kubelet/pods/1d91c912-ec19-4cf2-ade1-d8c9a9df95b5/volumes" Dec 04 16:08:25 crc kubenswrapper[4878]: I1204 16:08:25.179446 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:08:25 crc kubenswrapper[4878]: E1204 16:08:25.180115 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:08:37 crc kubenswrapper[4878]: I1204 16:08:37.186846 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:08:37 crc kubenswrapper[4878]: E1204 16:08:37.187813 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:08:39 crc kubenswrapper[4878]: I1204 16:08:39.485222 4878 generic.go:334] "Generic (PLEG): container finished" podID="992af669-26c3-4266-bf3d-023460cf30b3" containerID="2c70e98f1ae5bc581ef73e21c25f4d2039aefbbfdaba5ac23dd48729a553811a" exitCode=0 Dec 04 16:08:39 crc kubenswrapper[4878]: I1204 16:08:39.485293 4878 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" event={"ID":"992af669-26c3-4266-bf3d-023460cf30b3","Type":"ContainerDied","Data":"2c70e98f1ae5bc581ef73e21c25f4d2039aefbbfdaba5ac23dd48729a553811a"} Dec 04 16:08:40 crc kubenswrapper[4878]: I1204 16:08:40.919394 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.027408 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-inventory\") pod \"992af669-26c3-4266-bf3d-023460cf30b3\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.027743 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-ssh-key\") pod \"992af669-26c3-4266-bf3d-023460cf30b3\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.028015 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgz2c\" (UniqueName: \"kubernetes.io/projected/992af669-26c3-4266-bf3d-023460cf30b3-kube-api-access-vgz2c\") pod \"992af669-26c3-4266-bf3d-023460cf30b3\" (UID: \"992af669-26c3-4266-bf3d-023460cf30b3\") " Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.034025 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992af669-26c3-4266-bf3d-023460cf30b3-kube-api-access-vgz2c" (OuterVolumeSpecName: "kube-api-access-vgz2c") pod "992af669-26c3-4266-bf3d-023460cf30b3" (UID: "992af669-26c3-4266-bf3d-023460cf30b3"). InnerVolumeSpecName "kube-api-access-vgz2c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.054609 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-hq8dz"] Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.060201 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "992af669-26c3-4266-bf3d-023460cf30b3" (UID: "992af669-26c3-4266-bf3d-023460cf30b3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.066834 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-hq8dz"] Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.080183 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-inventory" (OuterVolumeSpecName: "inventory") pod "992af669-26c3-4266-bf3d-023460cf30b3" (UID: "992af669-26c3-4266-bf3d-023460cf30b3"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.131447 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgz2c\" (UniqueName: \"kubernetes.io/projected/992af669-26c3-4266-bf3d-023460cf30b3-kube-api-access-vgz2c\") on node \"crc\" DevicePath \"\"" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.131492 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.131502 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/992af669-26c3-4266-bf3d-023460cf30b3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.192859 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e365e201-9030-4248-a6d6-0c250d3f3251" path="/var/lib/kubelet/pods/e365e201-9030-4248-a6d6-0c250d3f3251/volumes" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.508465 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" event={"ID":"992af669-26c3-4266-bf3d-023460cf30b3","Type":"ContainerDied","Data":"f4ef63b398a3117746fe21fbe3002b1935862806b849796afc0bcea076f39d85"} Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.508785 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4ef63b398a3117746fe21fbe3002b1935862806b849796afc0bcea076f39d85" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.508848 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.620317 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2"] Dec 04 16:08:41 crc kubenswrapper[4878]: E1204 16:08:41.626498 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992af669-26c3-4266-bf3d-023460cf30b3" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.626536 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="992af669-26c3-4266-bf3d-023460cf30b3" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.626856 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="992af669-26c3-4266-bf3d-023460cf30b3" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.627700 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.630476 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.631288 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.631603 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.631985 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.637266 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2"] Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.743788 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfrnf\" (UniqueName: \"kubernetes.io/projected/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-kube-api-access-tfrnf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.743947 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 
16:08:41.744016 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.846337 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfrnf\" (UniqueName: \"kubernetes.io/projected/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-kube-api-access-tfrnf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.846402 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.846440 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.852121 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.855685 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.865091 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfrnf\" (UniqueName: \"kubernetes.io/projected/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-kube-api-access-tfrnf\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:41 crc kubenswrapper[4878]: I1204 16:08:41.946229 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:42 crc kubenswrapper[4878]: W1204 16:08:42.458951 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6b0a783_a808_4e9d_a207_6a4c56b36cd9.slice/crio-84208d6b3841b71e321e22039a14fd8a5aa4f5397001da0aa0bffce149da54ad WatchSource:0}: Error finding container 84208d6b3841b71e321e22039a14fd8a5aa4f5397001da0aa0bffce149da54ad: Status 404 returned error can't find the container with id 84208d6b3841b71e321e22039a14fd8a5aa4f5397001da0aa0bffce149da54ad Dec 04 16:08:42 crc kubenswrapper[4878]: I1204 16:08:42.462264 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 16:08:42 crc kubenswrapper[4878]: I1204 16:08:42.463032 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2"] Dec 04 16:08:42 crc kubenswrapper[4878]: I1204 16:08:42.522776 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" event={"ID":"e6b0a783-a808-4e9d-a207-6a4c56b36cd9","Type":"ContainerStarted","Data":"84208d6b3841b71e321e22039a14fd8a5aa4f5397001da0aa0bffce149da54ad"} Dec 04 16:08:44 crc kubenswrapper[4878]: I1204 16:08:44.545747 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" event={"ID":"e6b0a783-a808-4e9d-a207-6a4c56b36cd9","Type":"ContainerStarted","Data":"37815a618a337ca51286db759c7652b3de79cc0c1332535b0b987c1098f0aa25"} Dec 04 16:08:44 crc kubenswrapper[4878]: I1204 16:08:44.572716 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" podStartSLOduration=2.561153162 podStartE2EDuration="3.57268624s" podCreationTimestamp="2025-12-04 
16:08:41 +0000 UTC" firstStartedPulling="2025-12-04 16:08:42.461944498 +0000 UTC m=+1966.424481464" lastFinishedPulling="2025-12-04 16:08:43.473477586 +0000 UTC m=+1967.436014542" observedRunningTime="2025-12-04 16:08:44.566462873 +0000 UTC m=+1968.528999829" watchObservedRunningTime="2025-12-04 16:08:44.57268624 +0000 UTC m=+1968.535223196" Dec 04 16:08:45 crc kubenswrapper[4878]: I1204 16:08:45.041331 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j4t5x"] Dec 04 16:08:45 crc kubenswrapper[4878]: I1204 16:08:45.055336 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-j4t5x"] Dec 04 16:08:45 crc kubenswrapper[4878]: I1204 16:08:45.193049 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2166f9a6-f18a-4637-b089-5c87576d24d5" path="/var/lib/kubelet/pods/2166f9a6-f18a-4637-b089-5c87576d24d5/volumes" Dec 04 16:08:48 crc kubenswrapper[4878]: I1204 16:08:48.180008 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:08:48 crc kubenswrapper[4878]: E1204 16:08:48.180809 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:08:48 crc kubenswrapper[4878]: I1204 16:08:48.591200 4878 generic.go:334] "Generic (PLEG): container finished" podID="e6b0a783-a808-4e9d-a207-6a4c56b36cd9" containerID="37815a618a337ca51286db759c7652b3de79cc0c1332535b0b987c1098f0aa25" exitCode=0 Dec 04 16:08:48 crc kubenswrapper[4878]: I1204 16:08:48.591248 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" event={"ID":"e6b0a783-a808-4e9d-a207-6a4c56b36cd9","Type":"ContainerDied","Data":"37815a618a337ca51286db759c7652b3de79cc0c1332535b0b987c1098f0aa25"} Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.009187 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.134233 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-inventory\") pod \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.134605 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfrnf\" (UniqueName: \"kubernetes.io/projected/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-kube-api-access-tfrnf\") pod \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.134783 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-ssh-key\") pod \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\" (UID: \"e6b0a783-a808-4e9d-a207-6a4c56b36cd9\") " Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.142087 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-kube-api-access-tfrnf" (OuterVolumeSpecName: "kube-api-access-tfrnf") pod "e6b0a783-a808-4e9d-a207-6a4c56b36cd9" (UID: "e6b0a783-a808-4e9d-a207-6a4c56b36cd9"). InnerVolumeSpecName "kube-api-access-tfrnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.164796 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-inventory" (OuterVolumeSpecName: "inventory") pod "e6b0a783-a808-4e9d-a207-6a4c56b36cd9" (UID: "e6b0a783-a808-4e9d-a207-6a4c56b36cd9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.166332 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e6b0a783-a808-4e9d-a207-6a4c56b36cd9" (UID: "e6b0a783-a808-4e9d-a207-6a4c56b36cd9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.237552 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.237605 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.237622 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfrnf\" (UniqueName: \"kubernetes.io/projected/e6b0a783-a808-4e9d-a207-6a4c56b36cd9-kube-api-access-tfrnf\") on node \"crc\" DevicePath \"\"" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.611459 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" 
event={"ID":"e6b0a783-a808-4e9d-a207-6a4c56b36cd9","Type":"ContainerDied","Data":"84208d6b3841b71e321e22039a14fd8a5aa4f5397001da0aa0bffce149da54ad"} Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.611503 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.611516 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84208d6b3841b71e321e22039a14fd8a5aa4f5397001da0aa0bffce149da54ad" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.682326 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt"] Dec 04 16:08:50 crc kubenswrapper[4878]: E1204 16:08:50.682785 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6b0a783-a808-4e9d-a207-6a4c56b36cd9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.682808 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6b0a783-a808-4e9d-a207-6a4c56b36cd9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.683061 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6b0a783-a808-4e9d-a207-6a4c56b36cd9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.683801 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.687407 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.687453 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.687494 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.687407 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.698125 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt"] Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.748458 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t2mdt\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.748896 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5gsj\" (UniqueName: \"kubernetes.io/projected/9e312b69-8ad2-408e-9303-bfec15db442e-kube-api-access-p5gsj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t2mdt\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.748969 4878 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t2mdt\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.851071 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t2mdt\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.851186 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5gsj\" (UniqueName: \"kubernetes.io/projected/9e312b69-8ad2-408e-9303-bfec15db442e-kube-api-access-p5gsj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t2mdt\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.851237 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t2mdt\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.857678 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t2mdt\" (UID: 
\"9e312b69-8ad2-408e-9303-bfec15db442e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.857889 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t2mdt\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:50 crc kubenswrapper[4878]: I1204 16:08:50.869108 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5gsj\" (UniqueName: \"kubernetes.io/projected/9e312b69-8ad2-408e-9303-bfec15db442e-kube-api-access-p5gsj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-t2mdt\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:51 crc kubenswrapper[4878]: I1204 16:08:51.005823 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:08:51 crc kubenswrapper[4878]: I1204 16:08:51.588289 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt"] Dec 04 16:08:51 crc kubenswrapper[4878]: I1204 16:08:51.619771 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" event={"ID":"9e312b69-8ad2-408e-9303-bfec15db442e","Type":"ContainerStarted","Data":"47158f2d78dfb893ca884c0714a00aad33bcdc06a5d367d8fac1554c74030e20"} Dec 04 16:08:52 crc kubenswrapper[4878]: I1204 16:08:52.631055 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" event={"ID":"9e312b69-8ad2-408e-9303-bfec15db442e","Type":"ContainerStarted","Data":"a23f66594b5d01045a8895c5c863f367f718c8e2026a2a0c8d398e167e0b1ce7"} Dec 04 16:08:52 crc kubenswrapper[4878]: I1204 16:08:52.648445 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" podStartSLOduration=2.18550466 podStartE2EDuration="2.648424756s" podCreationTimestamp="2025-12-04 16:08:50 +0000 UTC" firstStartedPulling="2025-12-04 16:08:51.600222102 +0000 UTC m=+1975.562759058" lastFinishedPulling="2025-12-04 16:08:52.063142198 +0000 UTC m=+1976.025679154" observedRunningTime="2025-12-04 16:08:52.647831151 +0000 UTC m=+1976.610368107" watchObservedRunningTime="2025-12-04 16:08:52.648424756 +0000 UTC m=+1976.610961712" Dec 04 16:09:02 crc kubenswrapper[4878]: I1204 16:09:02.180372 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:09:02 crc kubenswrapper[4878]: I1204 16:09:02.733836 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" 
event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"8c603aa3422bb6021c46f5cf27e373633b467d44f95efffde184705345610235"} Dec 04 16:09:06 crc kubenswrapper[4878]: I1204 16:09:06.117698 4878 scope.go:117] "RemoveContainer" containerID="79a39fc4ccc7bd031ff1f8533c688719996126ebd4ac86f6d548bdad1754803b" Dec 04 16:09:06 crc kubenswrapper[4878]: I1204 16:09:06.178918 4878 scope.go:117] "RemoveContainer" containerID="84046239a736de12ea5542c886205af7998f4efa9ade1db88fedc2431972fde1" Dec 04 16:09:06 crc kubenswrapper[4878]: I1204 16:09:06.228016 4878 scope.go:117] "RemoveContainer" containerID="f62546df1a806b4affe0ec75a0cd217b50415d73d315dcdc9c72d0d10d3a53ab" Dec 04 16:09:26 crc kubenswrapper[4878]: I1204 16:09:26.045347 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-hh8f6"] Dec 04 16:09:26 crc kubenswrapper[4878]: I1204 16:09:26.055674 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-hh8f6"] Dec 04 16:09:27 crc kubenswrapper[4878]: I1204 16:09:27.191380 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a41a73-2e70-46ab-9608-523d804673b9" path="/var/lib/kubelet/pods/59a41a73-2e70-46ab-9608-523d804673b9/volumes" Dec 04 16:09:32 crc kubenswrapper[4878]: I1204 16:09:32.020910 4878 generic.go:334] "Generic (PLEG): container finished" podID="9e312b69-8ad2-408e-9303-bfec15db442e" containerID="a23f66594b5d01045a8895c5c863f367f718c8e2026a2a0c8d398e167e0b1ce7" exitCode=0 Dec 04 16:09:32 crc kubenswrapper[4878]: I1204 16:09:32.021013 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" event={"ID":"9e312b69-8ad2-408e-9303-bfec15db442e","Type":"ContainerDied","Data":"a23f66594b5d01045a8895c5c863f367f718c8e2026a2a0c8d398e167e0b1ce7"} Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.410391 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.471583 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-ssh-key\") pod \"9e312b69-8ad2-408e-9303-bfec15db442e\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.471657 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5gsj\" (UniqueName: \"kubernetes.io/projected/9e312b69-8ad2-408e-9303-bfec15db442e-kube-api-access-p5gsj\") pod \"9e312b69-8ad2-408e-9303-bfec15db442e\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.471917 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-inventory\") pod \"9e312b69-8ad2-408e-9303-bfec15db442e\" (UID: \"9e312b69-8ad2-408e-9303-bfec15db442e\") " Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.478258 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e312b69-8ad2-408e-9303-bfec15db442e-kube-api-access-p5gsj" (OuterVolumeSpecName: "kube-api-access-p5gsj") pod "9e312b69-8ad2-408e-9303-bfec15db442e" (UID: "9e312b69-8ad2-408e-9303-bfec15db442e"). InnerVolumeSpecName "kube-api-access-p5gsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.503929 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9e312b69-8ad2-408e-9303-bfec15db442e" (UID: "9e312b69-8ad2-408e-9303-bfec15db442e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.505651 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-inventory" (OuterVolumeSpecName: "inventory") pod "9e312b69-8ad2-408e-9303-bfec15db442e" (UID: "9e312b69-8ad2-408e-9303-bfec15db442e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.574724 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.574774 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5gsj\" (UniqueName: \"kubernetes.io/projected/9e312b69-8ad2-408e-9303-bfec15db442e-kube-api-access-p5gsj\") on node \"crc\" DevicePath \"\"" Dec 04 16:09:33 crc kubenswrapper[4878]: I1204 16:09:33.574790 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9e312b69-8ad2-408e-9303-bfec15db442e-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.040287 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" event={"ID":"9e312b69-8ad2-408e-9303-bfec15db442e","Type":"ContainerDied","Data":"47158f2d78dfb893ca884c0714a00aad33bcdc06a5d367d8fac1554c74030e20"} Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.040338 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47158f2d78dfb893ca884c0714a00aad33bcdc06a5d367d8fac1554c74030e20" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.040359 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-t2mdt" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.139858 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245"] Dec 04 16:09:34 crc kubenswrapper[4878]: E1204 16:09:34.140437 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e312b69-8ad2-408e-9303-bfec15db442e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.140463 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e312b69-8ad2-408e-9303-bfec15db442e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.140711 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e312b69-8ad2-408e-9303-bfec15db442e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.141508 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.146277 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.146749 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.146777 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.147733 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.148612 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245"] Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.290410 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pp245\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.290546 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pp245\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.290585 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2zqn\" (UniqueName: \"kubernetes.io/projected/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-kube-api-access-d2zqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pp245\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.392167 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pp245\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.392343 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pp245\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.392381 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2zqn\" (UniqueName: \"kubernetes.io/projected/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-kube-api-access-d2zqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pp245\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.399405 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pp245\" (UID: 
\"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.399406 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pp245\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.413466 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2zqn\" (UniqueName: \"kubernetes.io/projected/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-kube-api-access-d2zqn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-pp245\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:34 crc kubenswrapper[4878]: I1204 16:09:34.476484 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:09:35 crc kubenswrapper[4878]: I1204 16:09:35.065014 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245"] Dec 04 16:09:36 crc kubenswrapper[4878]: I1204 16:09:36.063106 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" event={"ID":"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3","Type":"ContainerStarted","Data":"e9954d1c5e1fcc531d15a18f3ee6dbfbe2f4ab6d22a17527162c11bbdf6aa580"} Dec 04 16:09:36 crc kubenswrapper[4878]: I1204 16:09:36.063419 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" event={"ID":"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3","Type":"ContainerStarted","Data":"62312693f66486518bbf2ff459b6d6c71164d61d7c71520dc94d85d9deb46bdb"} Dec 04 16:09:36 crc kubenswrapper[4878]: I1204 16:09:36.081770 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" podStartSLOduration=1.6288083 podStartE2EDuration="2.081750244s" podCreationTimestamp="2025-12-04 16:09:34 +0000 UTC" firstStartedPulling="2025-12-04 16:09:35.062244415 +0000 UTC m=+2019.024781371" lastFinishedPulling="2025-12-04 16:09:35.515186359 +0000 UTC m=+2019.477723315" observedRunningTime="2025-12-04 16:09:36.080616106 +0000 UTC m=+2020.043153062" watchObservedRunningTime="2025-12-04 16:09:36.081750244 +0000 UTC m=+2020.044287200" Dec 04 16:10:06 crc kubenswrapper[4878]: I1204 16:10:06.343419 4878 scope.go:117] "RemoveContainer" containerID="e193f9008f8e6a3781474bc471c383f4c23114a29ca653bf57a8475442f375b9" Dec 04 16:10:28 crc kubenswrapper[4878]: I1204 16:10:28.534493 4878 generic.go:334] "Generic (PLEG): container finished" podID="a0a7ed48-a6ca-45c3-9d33-2ebce62512b3" 
containerID="e9954d1c5e1fcc531d15a18f3ee6dbfbe2f4ab6d22a17527162c11bbdf6aa580" exitCode=0 Dec 04 16:10:28 crc kubenswrapper[4878]: I1204 16:10:28.534567 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" event={"ID":"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3","Type":"ContainerDied","Data":"e9954d1c5e1fcc531d15a18f3ee6dbfbe2f4ab6d22a17527162c11bbdf6aa580"} Dec 04 16:10:29 crc kubenswrapper[4878]: I1204 16:10:29.972066 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.112103 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-inventory\") pod \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.112200 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-ssh-key\") pod \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.112259 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2zqn\" (UniqueName: \"kubernetes.io/projected/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-kube-api-access-d2zqn\") pod \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\" (UID: \"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3\") " Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.118882 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-kube-api-access-d2zqn" (OuterVolumeSpecName: "kube-api-access-d2zqn") pod 
"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3" (UID: "a0a7ed48-a6ca-45c3-9d33-2ebce62512b3"). InnerVolumeSpecName "kube-api-access-d2zqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.140887 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a0a7ed48-a6ca-45c3-9d33-2ebce62512b3" (UID: "a0a7ed48-a6ca-45c3-9d33-2ebce62512b3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.144054 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-inventory" (OuterVolumeSpecName: "inventory") pod "a0a7ed48-a6ca-45c3-9d33-2ebce62512b3" (UID: "a0a7ed48-a6ca-45c3-9d33-2ebce62512b3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.215172 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.215216 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.215231 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2zqn\" (UniqueName: \"kubernetes.io/projected/a0a7ed48-a6ca-45c3-9d33-2ebce62512b3-kube-api-access-d2zqn\") on node \"crc\" DevicePath \"\"" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.556902 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" 
event={"ID":"a0a7ed48-a6ca-45c3-9d33-2ebce62512b3","Type":"ContainerDied","Data":"62312693f66486518bbf2ff459b6d6c71164d61d7c71520dc94d85d9deb46bdb"} Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.556956 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62312693f66486518bbf2ff459b6d6c71164d61d7c71520dc94d85d9deb46bdb" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.557007 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-pp245" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.833408 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jnbpb"] Dec 04 16:10:30 crc kubenswrapper[4878]: E1204 16:10:30.834187 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a7ed48-a6ca-45c3-9d33-2ebce62512b3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.834209 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a7ed48-a6ca-45c3-9d33-2ebce62512b3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.834443 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a7ed48-a6ca-45c3-9d33-2ebce62512b3" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.835342 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.840088 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.840992 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.841839 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.842593 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:10:30 crc kubenswrapper[4878]: I1204 16:10:30.843365 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jnbpb"] Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.032701 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srfwg\" (UniqueName: \"kubernetes.io/projected/4831aa21-6bfc-415f-b6e1-53a350cf923b-kube-api-access-srfwg\") pod \"ssh-known-hosts-edpm-deployment-jnbpb\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.032776 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jnbpb\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.032848 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jnbpb\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.135713 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jnbpb\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.135918 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srfwg\" (UniqueName: \"kubernetes.io/projected/4831aa21-6bfc-415f-b6e1-53a350cf923b-kube-api-access-srfwg\") pod \"ssh-known-hosts-edpm-deployment-jnbpb\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.135966 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jnbpb\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.144042 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-jnbpb\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc 
kubenswrapper[4878]: I1204 16:10:31.144576 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-jnbpb\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.157609 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srfwg\" (UniqueName: \"kubernetes.io/projected/4831aa21-6bfc-415f-b6e1-53a350cf923b-kube-api-access-srfwg\") pod \"ssh-known-hosts-edpm-deployment-jnbpb\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.161330 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:31 crc kubenswrapper[4878]: I1204 16:10:31.720076 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-jnbpb"] Dec 04 16:10:32 crc kubenswrapper[4878]: I1204 16:10:32.575797 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" event={"ID":"4831aa21-6bfc-415f-b6e1-53a350cf923b","Type":"ContainerStarted","Data":"392321219b33050718506f3c61833e2cd1a7027f341dfa9ba4b2167bb5a51556"} Dec 04 16:10:33 crc kubenswrapper[4878]: I1204 16:10:33.585756 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" event={"ID":"4831aa21-6bfc-415f-b6e1-53a350cf923b","Type":"ContainerStarted","Data":"11ab1a8ab209e9d97532f2937e5274f77cf9ce32625d87917c4024bcd6bac1ff"} Dec 04 16:10:33 crc kubenswrapper[4878]: I1204 16:10:33.651321 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" podStartSLOduration=2.9719912109999997 podStartE2EDuration="3.651300524s" podCreationTimestamp="2025-12-04 16:10:30 +0000 UTC" firstStartedPulling="2025-12-04 16:10:31.72481664 +0000 UTC m=+2075.687353596" lastFinishedPulling="2025-12-04 16:10:32.404125953 +0000 UTC m=+2076.366662909" observedRunningTime="2025-12-04 16:10:33.643979219 +0000 UTC m=+2077.606516175" watchObservedRunningTime="2025-12-04 16:10:33.651300524 +0000 UTC m=+2077.613837470" Dec 04 16:10:40 crc kubenswrapper[4878]: I1204 16:10:40.669203 4878 generic.go:334] "Generic (PLEG): container finished" podID="4831aa21-6bfc-415f-b6e1-53a350cf923b" containerID="11ab1a8ab209e9d97532f2937e5274f77cf9ce32625d87917c4024bcd6bac1ff" exitCode=0 Dec 04 16:10:40 crc kubenswrapper[4878]: I1204 16:10:40.669291 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" event={"ID":"4831aa21-6bfc-415f-b6e1-53a350cf923b","Type":"ContainerDied","Data":"11ab1a8ab209e9d97532f2937e5274f77cf9ce32625d87917c4024bcd6bac1ff"} Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.192715 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.274747 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-inventory-0\") pod \"4831aa21-6bfc-415f-b6e1-53a350cf923b\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.275189 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srfwg\" (UniqueName: \"kubernetes.io/projected/4831aa21-6bfc-415f-b6e1-53a350cf923b-kube-api-access-srfwg\") pod \"4831aa21-6bfc-415f-b6e1-53a350cf923b\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.275314 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-ssh-key-openstack-edpm-ipam\") pod \"4831aa21-6bfc-415f-b6e1-53a350cf923b\" (UID: \"4831aa21-6bfc-415f-b6e1-53a350cf923b\") " Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.282035 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4831aa21-6bfc-415f-b6e1-53a350cf923b-kube-api-access-srfwg" (OuterVolumeSpecName: "kube-api-access-srfwg") pod "4831aa21-6bfc-415f-b6e1-53a350cf923b" (UID: "4831aa21-6bfc-415f-b6e1-53a350cf923b"). InnerVolumeSpecName "kube-api-access-srfwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.305224 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4831aa21-6bfc-415f-b6e1-53a350cf923b" (UID: "4831aa21-6bfc-415f-b6e1-53a350cf923b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.305607 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "4831aa21-6bfc-415f-b6e1-53a350cf923b" (UID: "4831aa21-6bfc-415f-b6e1-53a350cf923b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.376439 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.376486 4878 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/4831aa21-6bfc-415f-b6e1-53a350cf923b-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.376499 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srfwg\" (UniqueName: \"kubernetes.io/projected/4831aa21-6bfc-415f-b6e1-53a350cf923b-kube-api-access-srfwg\") on node \"crc\" DevicePath \"\"" Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.702810 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" 
event={"ID":"4831aa21-6bfc-415f-b6e1-53a350cf923b","Type":"ContainerDied","Data":"392321219b33050718506f3c61833e2cd1a7027f341dfa9ba4b2167bb5a51556"} Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.702857 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="392321219b33050718506f3c61833e2cd1a7027f341dfa9ba4b2167bb5a51556" Dec 04 16:10:42 crc kubenswrapper[4878]: I1204 16:10:42.702930 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-jnbpb" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.016199 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw"] Dec 04 16:10:43 crc kubenswrapper[4878]: E1204 16:10:43.016762 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4831aa21-6bfc-415f-b6e1-53a350cf923b" containerName="ssh-known-hosts-edpm-deployment" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.016786 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4831aa21-6bfc-415f-b6e1-53a350cf923b" containerName="ssh-known-hosts-edpm-deployment" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.017074 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="4831aa21-6bfc-415f-b6e1-53a350cf923b" containerName="ssh-known-hosts-edpm-deployment" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.017976 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.020793 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.020845 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.020886 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.022118 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.027278 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw"] Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.077136 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b6xnw\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.077328 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b6xnw\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.077521 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzhpt\" (UniqueName: \"kubernetes.io/projected/b35793af-eea9-4355-8bd0-8a7aec7b412a-kube-api-access-pzhpt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b6xnw\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.185990 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b6xnw\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.186333 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b6xnw\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.186399 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzhpt\" (UniqueName: \"kubernetes.io/projected/b35793af-eea9-4355-8bd0-8a7aec7b412a-kube-api-access-pzhpt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b6xnw\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.192036 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b6xnw\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.197358 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b6xnw\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.205694 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzhpt\" (UniqueName: \"kubernetes.io/projected/b35793af-eea9-4355-8bd0-8a7aec7b412a-kube-api-access-pzhpt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-b6xnw\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.345695 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:43 crc kubenswrapper[4878]: I1204 16:10:43.859421 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw"] Dec 04 16:10:44 crc kubenswrapper[4878]: I1204 16:10:44.722239 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" event={"ID":"b35793af-eea9-4355-8bd0-8a7aec7b412a","Type":"ContainerStarted","Data":"f6cf2b488c1272b37e7344339fb5bcaa8c89193eb4bd293cbe50c4e16ab0a559"} Dec 04 16:10:44 crc kubenswrapper[4878]: I1204 16:10:44.722762 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" event={"ID":"b35793af-eea9-4355-8bd0-8a7aec7b412a","Type":"ContainerStarted","Data":"1dacdfc06e0626a94ad68d14c196cbe7ab98a7d7a5938fc2936dc79eb5ea39d0"} Dec 04 16:10:44 crc kubenswrapper[4878]: I1204 16:10:44.748634 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" podStartSLOduration=2.281825374 podStartE2EDuration="2.748609568s" podCreationTimestamp="2025-12-04 16:10:42 +0000 UTC" firstStartedPulling="2025-12-04 16:10:43.867046614 +0000 UTC m=+2087.829583570" lastFinishedPulling="2025-12-04 16:10:44.333830818 +0000 UTC m=+2088.296367764" observedRunningTime="2025-12-04 16:10:44.741783196 +0000 UTC m=+2088.704320162" watchObservedRunningTime="2025-12-04 16:10:44.748609568 +0000 UTC m=+2088.711146524" Dec 04 16:10:52 crc kubenswrapper[4878]: I1204 16:10:52.797406 4878 generic.go:334] "Generic (PLEG): container finished" podID="b35793af-eea9-4355-8bd0-8a7aec7b412a" containerID="f6cf2b488c1272b37e7344339fb5bcaa8c89193eb4bd293cbe50c4e16ab0a559" exitCode=0 Dec 04 16:10:52 crc kubenswrapper[4878]: I1204 16:10:52.797481 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" event={"ID":"b35793af-eea9-4355-8bd0-8a7aec7b412a","Type":"ContainerDied","Data":"f6cf2b488c1272b37e7344339fb5bcaa8c89193eb4bd293cbe50c4e16ab0a559"} Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.205790 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.257819 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzhpt\" (UniqueName: \"kubernetes.io/projected/b35793af-eea9-4355-8bd0-8a7aec7b412a-kube-api-access-pzhpt\") pod \"b35793af-eea9-4355-8bd0-8a7aec7b412a\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.257907 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-inventory\") pod \"b35793af-eea9-4355-8bd0-8a7aec7b412a\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.257940 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-ssh-key\") pod \"b35793af-eea9-4355-8bd0-8a7aec7b412a\" (UID: \"b35793af-eea9-4355-8bd0-8a7aec7b412a\") " Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.265546 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35793af-eea9-4355-8bd0-8a7aec7b412a-kube-api-access-pzhpt" (OuterVolumeSpecName: "kube-api-access-pzhpt") pod "b35793af-eea9-4355-8bd0-8a7aec7b412a" (UID: "b35793af-eea9-4355-8bd0-8a7aec7b412a"). InnerVolumeSpecName "kube-api-access-pzhpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.288935 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b35793af-eea9-4355-8bd0-8a7aec7b412a" (UID: "b35793af-eea9-4355-8bd0-8a7aec7b412a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.290786 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-inventory" (OuterVolumeSpecName: "inventory") pod "b35793af-eea9-4355-8bd0-8a7aec7b412a" (UID: "b35793af-eea9-4355-8bd0-8a7aec7b412a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.360455 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzhpt\" (UniqueName: \"kubernetes.io/projected/b35793af-eea9-4355-8bd0-8a7aec7b412a-kube-api-access-pzhpt\") on node \"crc\" DevicePath \"\"" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.360503 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.360517 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b35793af-eea9-4355-8bd0-8a7aec7b412a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.817892 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" 
event={"ID":"b35793af-eea9-4355-8bd0-8a7aec7b412a","Type":"ContainerDied","Data":"1dacdfc06e0626a94ad68d14c196cbe7ab98a7d7a5938fc2936dc79eb5ea39d0"} Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.817945 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-b6xnw" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.817951 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dacdfc06e0626a94ad68d14c196cbe7ab98a7d7a5938fc2936dc79eb5ea39d0" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.886988 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4"] Dec 04 16:10:54 crc kubenswrapper[4878]: E1204 16:10:54.887542 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35793af-eea9-4355-8bd0-8a7aec7b412a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.887568 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35793af-eea9-4355-8bd0-8a7aec7b412a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.887830 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="b35793af-eea9-4355-8bd0-8a7aec7b412a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.888758 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.892050 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.892290 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.892382 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.892794 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.910730 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4"] Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.973563 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.973734 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:54 crc kubenswrapper[4878]: I1204 16:10:54.973794 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcbr4\" (UniqueName: \"kubernetes.io/projected/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-kube-api-access-mcbr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:55 crc kubenswrapper[4878]: I1204 16:10:55.076107 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:55 crc kubenswrapper[4878]: I1204 16:10:55.076260 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:55 crc kubenswrapper[4878]: I1204 16:10:55.076316 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcbr4\" (UniqueName: \"kubernetes.io/projected/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-kube-api-access-mcbr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:55 crc kubenswrapper[4878]: I1204 16:10:55.079934 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4\" (UID: 
\"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:55 crc kubenswrapper[4878]: I1204 16:10:55.081813 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:55 crc kubenswrapper[4878]: I1204 16:10:55.095865 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcbr4\" (UniqueName: \"kubernetes.io/projected/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-kube-api-access-mcbr4\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:55 crc kubenswrapper[4878]: I1204 16:10:55.282587 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:10:55 crc kubenswrapper[4878]: I1204 16:10:55.788295 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4"] Dec 04 16:10:55 crc kubenswrapper[4878]: I1204 16:10:55.828914 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" event={"ID":"633ccb62-7bfe-48dc-bd16-1a042f8d57f6","Type":"ContainerStarted","Data":"fd4cee7c96ab1410cbe5ab81d6231ddca1a9902f086814dc2d021aa10739824c"} Dec 04 16:10:57 crc kubenswrapper[4878]: I1204 16:10:57.902800 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" event={"ID":"633ccb62-7bfe-48dc-bd16-1a042f8d57f6","Type":"ContainerStarted","Data":"e7a3603b68e7e9f3abb98adb071a575caf2961f7948e1fbf62295511b6e77b2e"} Dec 04 16:10:57 crc kubenswrapper[4878]: I1204 16:10:57.928589 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" podStartSLOduration=3.04871177 podStartE2EDuration="3.928566442s" podCreationTimestamp="2025-12-04 16:10:54 +0000 UTC" firstStartedPulling="2025-12-04 16:10:55.798169034 +0000 UTC m=+2099.760705990" lastFinishedPulling="2025-12-04 16:10:56.678023706 +0000 UTC m=+2100.640560662" observedRunningTime="2025-12-04 16:10:57.918070297 +0000 UTC m=+2101.880607253" watchObservedRunningTime="2025-12-04 16:10:57.928566442 +0000 UTC m=+2101.891103398" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.485836 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q2xmx"] Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.490340 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.503487 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2xmx"] Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.558097 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-catalog-content\") pod \"redhat-operators-q2xmx\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.558449 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxg8f\" (UniqueName: \"kubernetes.io/projected/5fe30f06-544d-43ab-a69a-952a0526de8c-kube-api-access-bxg8f\") pod \"redhat-operators-q2xmx\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.558516 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-utilities\") pod \"redhat-operators-q2xmx\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.660265 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxg8f\" (UniqueName: \"kubernetes.io/projected/5fe30f06-544d-43ab-a69a-952a0526de8c-kube-api-access-bxg8f\") pod \"redhat-operators-q2xmx\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.660364 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-utilities\") pod \"redhat-operators-q2xmx\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.660461 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-catalog-content\") pod \"redhat-operators-q2xmx\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.661088 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-catalog-content\") pod \"redhat-operators-q2xmx\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.661321 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-utilities\") pod \"redhat-operators-q2xmx\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.684072 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxg8f\" (UniqueName: \"kubernetes.io/projected/5fe30f06-544d-43ab-a69a-952a0526de8c-kube-api-access-bxg8f\") pod \"redhat-operators-q2xmx\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:05 crc kubenswrapper[4878]: I1204 16:11:05.837947 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:06 crc kubenswrapper[4878]: I1204 16:11:06.314491 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2xmx"] Dec 04 16:11:07 crc kubenswrapper[4878]: I1204 16:11:07.028154 4878 generic.go:334] "Generic (PLEG): container finished" podID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerID="d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187" exitCode=0 Dec 04 16:11:07 crc kubenswrapper[4878]: I1204 16:11:07.028276 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2xmx" event={"ID":"5fe30f06-544d-43ab-a69a-952a0526de8c","Type":"ContainerDied","Data":"d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187"} Dec 04 16:11:07 crc kubenswrapper[4878]: I1204 16:11:07.028649 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2xmx" event={"ID":"5fe30f06-544d-43ab-a69a-952a0526de8c","Type":"ContainerStarted","Data":"97f34579bc9f2fd81bac38514248b8a754104d5546c98a2f1233d78dbb7a2deb"} Dec 04 16:11:07 crc kubenswrapper[4878]: I1204 16:11:07.031022 4878 generic.go:334] "Generic (PLEG): container finished" podID="633ccb62-7bfe-48dc-bd16-1a042f8d57f6" containerID="e7a3603b68e7e9f3abb98adb071a575caf2961f7948e1fbf62295511b6e77b2e" exitCode=0 Dec 04 16:11:07 crc kubenswrapper[4878]: I1204 16:11:07.031053 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" event={"ID":"633ccb62-7bfe-48dc-bd16-1a042f8d57f6","Type":"ContainerDied","Data":"e7a3603b68e7e9f3abb98adb071a575caf2961f7948e1fbf62295511b6e77b2e"} Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.044808 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2xmx" 
event={"ID":"5fe30f06-544d-43ab-a69a-952a0526de8c","Type":"ContainerStarted","Data":"13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1"} Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.532666 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.551793 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-ssh-key\") pod \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.551919 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-inventory\") pod \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.551993 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcbr4\" (UniqueName: \"kubernetes.io/projected/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-kube-api-access-mcbr4\") pod \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\" (UID: \"633ccb62-7bfe-48dc-bd16-1a042f8d57f6\") " Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.569078 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-kube-api-access-mcbr4" (OuterVolumeSpecName: "kube-api-access-mcbr4") pod "633ccb62-7bfe-48dc-bd16-1a042f8d57f6" (UID: "633ccb62-7bfe-48dc-bd16-1a042f8d57f6"). InnerVolumeSpecName "kube-api-access-mcbr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.604091 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "633ccb62-7bfe-48dc-bd16-1a042f8d57f6" (UID: "633ccb62-7bfe-48dc-bd16-1a042f8d57f6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.609035 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-inventory" (OuterVolumeSpecName: "inventory") pod "633ccb62-7bfe-48dc-bd16-1a042f8d57f6" (UID: "633ccb62-7bfe-48dc-bd16-1a042f8d57f6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.653547 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.653581 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcbr4\" (UniqueName: \"kubernetes.io/projected/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-kube-api-access-mcbr4\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:08 crc kubenswrapper[4878]: I1204 16:11:08.653601 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/633ccb62-7bfe-48dc-bd16-1a042f8d57f6-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.058382 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" 
event={"ID":"633ccb62-7bfe-48dc-bd16-1a042f8d57f6","Type":"ContainerDied","Data":"fd4cee7c96ab1410cbe5ab81d6231ddca1a9902f086814dc2d021aa10739824c"} Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.058424 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.058434 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd4cee7c96ab1410cbe5ab81d6231ddca1a9902f086814dc2d021aa10739824c" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.189714 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn"] Dec 04 16:11:09 crc kubenswrapper[4878]: E1204 16:11:09.190356 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="633ccb62-7bfe-48dc-bd16-1a042f8d57f6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.190374 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="633ccb62-7bfe-48dc-bd16-1a042f8d57f6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.190608 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="633ccb62-7bfe-48dc-bd16-1a042f8d57f6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.192136 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.196668 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.196675 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.196758 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.196756 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.196823 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.197149 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.197259 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.197156 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.199783 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn"] Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.368479 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.368536 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.368584 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45f2n\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-kube-api-access-45f2n\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.368941 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369067 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369161 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369335 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369456 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369532 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369604 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369652 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369672 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369850 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.369930 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.471792 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.471925 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.471960 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: 
\"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472001 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45f2n\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-kube-api-access-45f2n\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472081 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472123 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472146 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: 
\"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472182 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472209 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472247 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472278 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472307 
4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472326 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.472370 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.478015 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.478414 4878 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.480350 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.481849 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.483543 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.484596 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.486273 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.486348 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.486526 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.486754 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.487182 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.488092 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.489569 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.491232 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45f2n\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-kube-api-access-45f2n\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:09 crc kubenswrapper[4878]: I1204 16:11:09.523845 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:10 crc kubenswrapper[4878]: I1204 16:11:10.047410 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn"] Dec 04 16:11:10 crc kubenswrapper[4878]: I1204 16:11:10.070636 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" event={"ID":"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94","Type":"ContainerStarted","Data":"6225c6df55986eb651ded3a9ff85252157d41b81c282e715fa336ff53cc8ac3a"} Dec 04 16:11:12 crc kubenswrapper[4878]: I1204 16:11:12.093107 4878 generic.go:334] "Generic (PLEG): container finished" podID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerID="13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1" exitCode=0 Dec 04 16:11:12 crc kubenswrapper[4878]: I1204 16:11:12.093246 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2xmx" event={"ID":"5fe30f06-544d-43ab-a69a-952a0526de8c","Type":"ContainerDied","Data":"13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1"} Dec 04 16:11:12 crc kubenswrapper[4878]: I1204 16:11:12.097274 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" event={"ID":"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94","Type":"ContainerStarted","Data":"bf3d211c2c135f1c04306d93edd697bc4a2499f5ca8b9777dd32aea7cfaac1b2"} Dec 04 16:11:12 crc kubenswrapper[4878]: I1204 16:11:12.136907 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" podStartSLOduration=1.810612508 
podStartE2EDuration="3.136865509s" podCreationTimestamp="2025-12-04 16:11:09 +0000 UTC" firstStartedPulling="2025-12-04 16:11:10.058389303 +0000 UTC m=+2114.020926259" lastFinishedPulling="2025-12-04 16:11:11.384642304 +0000 UTC m=+2115.347179260" observedRunningTime="2025-12-04 16:11:12.13333174 +0000 UTC m=+2116.095868686" watchObservedRunningTime="2025-12-04 16:11:12.136865509 +0000 UTC m=+2116.099402465" Dec 04 16:11:13 crc kubenswrapper[4878]: I1204 16:11:13.113194 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2xmx" event={"ID":"5fe30f06-544d-43ab-a69a-952a0526de8c","Type":"ContainerStarted","Data":"c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427"} Dec 04 16:11:13 crc kubenswrapper[4878]: I1204 16:11:13.139178 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2xmx" podStartSLOduration=2.411932336 podStartE2EDuration="8.139155344s" podCreationTimestamp="2025-12-04 16:11:05 +0000 UTC" firstStartedPulling="2025-12-04 16:11:07.030278083 +0000 UTC m=+2110.992815039" lastFinishedPulling="2025-12-04 16:11:12.757501091 +0000 UTC m=+2116.720038047" observedRunningTime="2025-12-04 16:11:13.133200904 +0000 UTC m=+2117.095737860" watchObservedRunningTime="2025-12-04 16:11:13.139155344 +0000 UTC m=+2117.101692300" Dec 04 16:11:15 crc kubenswrapper[4878]: I1204 16:11:15.839446 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:15 crc kubenswrapper[4878]: I1204 16:11:15.840048 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:16 crc kubenswrapper[4878]: I1204 16:11:16.892350 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-q2xmx" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerName="registry-server" 
probeResult="failure" output=< Dec 04 16:11:16 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 16:11:16 crc kubenswrapper[4878]: > Dec 04 16:11:25 crc kubenswrapper[4878]: I1204 16:11:25.895458 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:25 crc kubenswrapper[4878]: I1204 16:11:25.950489 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:26 crc kubenswrapper[4878]: I1204 16:11:26.149905 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2xmx"] Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.243138 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q2xmx" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerName="registry-server" containerID="cri-o://c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427" gracePeriod=2 Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.753119 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.874471 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-utilities\") pod \"5fe30f06-544d-43ab-a69a-952a0526de8c\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.874678 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-catalog-content\") pod \"5fe30f06-544d-43ab-a69a-952a0526de8c\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.874735 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxg8f\" (UniqueName: \"kubernetes.io/projected/5fe30f06-544d-43ab-a69a-952a0526de8c-kube-api-access-bxg8f\") pod \"5fe30f06-544d-43ab-a69a-952a0526de8c\" (UID: \"5fe30f06-544d-43ab-a69a-952a0526de8c\") " Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.875225 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-utilities" (OuterVolumeSpecName: "utilities") pod "5fe30f06-544d-43ab-a69a-952a0526de8c" (UID: "5fe30f06-544d-43ab-a69a-952a0526de8c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.875954 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.880034 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe30f06-544d-43ab-a69a-952a0526de8c-kube-api-access-bxg8f" (OuterVolumeSpecName: "kube-api-access-bxg8f") pod "5fe30f06-544d-43ab-a69a-952a0526de8c" (UID: "5fe30f06-544d-43ab-a69a-952a0526de8c"). InnerVolumeSpecName "kube-api-access-bxg8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.978368 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxg8f\" (UniqueName: \"kubernetes.io/projected/5fe30f06-544d-43ab-a69a-952a0526de8c-kube-api-access-bxg8f\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:27 crc kubenswrapper[4878]: I1204 16:11:27.995752 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fe30f06-544d-43ab-a69a-952a0526de8c" (UID: "5fe30f06-544d-43ab-a69a-952a0526de8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.080385 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fe30f06-544d-43ab-a69a-952a0526de8c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.257992 4878 generic.go:334] "Generic (PLEG): container finished" podID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerID="c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427" exitCode=0 Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.258045 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2xmx" event={"ID":"5fe30f06-544d-43ab-a69a-952a0526de8c","Type":"ContainerDied","Data":"c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427"} Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.258078 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2xmx" event={"ID":"5fe30f06-544d-43ab-a69a-952a0526de8c","Type":"ContainerDied","Data":"97f34579bc9f2fd81bac38514248b8a754104d5546c98a2f1233d78dbb7a2deb"} Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.258096 4878 scope.go:117] "RemoveContainer" containerID="c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.258131 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2xmx" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.295308 4878 scope.go:117] "RemoveContainer" containerID="13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.296704 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2xmx"] Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.305408 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q2xmx"] Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.328195 4878 scope.go:117] "RemoveContainer" containerID="d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.388343 4878 scope.go:117] "RemoveContainer" containerID="c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427" Dec 04 16:11:28 crc kubenswrapper[4878]: E1204 16:11:28.388844 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427\": container with ID starting with c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427 not found: ID does not exist" containerID="c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.388905 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427"} err="failed to get container status \"c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427\": rpc error: code = NotFound desc = could not find container \"c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427\": container with ID starting with c9296450740f24ef2ebda5730ee690b82f1529c57656dc38f0f4f55a73f67427 not found: ID does 
not exist" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.388935 4878 scope.go:117] "RemoveContainer" containerID="13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1" Dec 04 16:11:28 crc kubenswrapper[4878]: E1204 16:11:28.389331 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1\": container with ID starting with 13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1 not found: ID does not exist" containerID="13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.389368 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1"} err="failed to get container status \"13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1\": rpc error: code = NotFound desc = could not find container \"13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1\": container with ID starting with 13c7bb0564eec9b4dfb9d3b9a4d3eb5fe1ed676d91cddff5a9d40115d92370a1 not found: ID does not exist" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.389394 4878 scope.go:117] "RemoveContainer" containerID="d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187" Dec 04 16:11:28 crc kubenswrapper[4878]: E1204 16:11:28.389709 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187\": container with ID starting with d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187 not found: ID does not exist" containerID="d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187" Dec 04 16:11:28 crc kubenswrapper[4878]: I1204 16:11:28.389737 4878 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187"} err="failed to get container status \"d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187\": rpc error: code = NotFound desc = could not find container \"d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187\": container with ID starting with d869ad353920cb47e10d98042e341cab42949ab4d0bc5171ac954ba518267187 not found: ID does not exist" Dec 04 16:11:29 crc kubenswrapper[4878]: I1204 16:11:29.192929 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" path="/var/lib/kubelet/pods/5fe30f06-544d-43ab-a69a-952a0526de8c/volumes" Dec 04 16:11:30 crc kubenswrapper[4878]: I1204 16:11:30.840162 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:11:30 crc kubenswrapper[4878]: I1204 16:11:30.840523 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:11:51 crc kubenswrapper[4878]: I1204 16:11:51.504406 4878 generic.go:334] "Generic (PLEG): container finished" podID="7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" containerID="bf3d211c2c135f1c04306d93edd697bc4a2499f5ca8b9777dd32aea7cfaac1b2" exitCode=0 Dec 04 16:11:51 crc kubenswrapper[4878]: I1204 16:11:51.504505 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" 
event={"ID":"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94","Type":"ContainerDied","Data":"bf3d211c2c135f1c04306d93edd697bc4a2499f5ca8b9777dd32aea7cfaac1b2"} Dec 04 16:11:52 crc kubenswrapper[4878]: I1204 16:11:52.997924 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.138530 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ovn-combined-ca-bundle\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.138808 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-nova-combined-ca-bundle\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.138856 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-libvirt-combined-ca-bundle\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.138917 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.138950 4878 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-repo-setup-combined-ca-bundle\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.139087 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.139126 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.139242 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-inventory\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.139939 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45f2n\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-kube-api-access-45f2n\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.139987 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-bootstrap-combined-ca-bundle\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.140032 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-neutron-metadata-combined-ca-bundle\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.140066 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-ovn-default-certs-0\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.140096 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ssh-key\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.140128 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-telemetry-combined-ca-bundle\") pod \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\" (UID: \"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94\") " Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.146158 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: 
"repo-setup-combined-ca-bundle") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.147261 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-kube-api-access-45f2n" (OuterVolumeSpecName: "kube-api-access-45f2n") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "kube-api-access-45f2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.148057 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.148916 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.148975 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.149153 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.149641 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.151010 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.151757 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.152475 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.152781 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.154738 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.176620 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-inventory" (OuterVolumeSpecName: "inventory") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.177860 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" (UID: "7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.242975 4878 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243026 4878 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243044 4878 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243057 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243073 4878 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243087 4878 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243099 4878 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243114 4878 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243129 4878 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243145 4878 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243160 4878 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243175 4878 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.243188 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: 
I1204 16:11:53.243200 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45f2n\" (UniqueName: \"kubernetes.io/projected/7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94-kube-api-access-45f2n\") on node \"crc\" DevicePath \"\"" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.528576 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" event={"ID":"7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94","Type":"ContainerDied","Data":"6225c6df55986eb651ded3a9ff85252157d41b81c282e715fa336ff53cc8ac3a"} Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.528639 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6225c6df55986eb651ded3a9ff85252157d41b81c282e715fa336ff53cc8ac3a" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.528648 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.809541 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd"] Dec 04 16:11:53 crc kubenswrapper[4878]: E1204 16:11:53.810058 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerName="extract-utilities" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.810077 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerName="extract-utilities" Dec 04 16:11:53 crc kubenswrapper[4878]: E1204 16:11:53.810092 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerName="extract-content" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.810098 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerName="extract-content" Dec 04 
16:11:53 crc kubenswrapper[4878]: E1204 16:11:53.810123 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.810130 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 04 16:11:53 crc kubenswrapper[4878]: E1204 16:11:53.810147 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerName="registry-server" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.810153 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerName="registry-server" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.810352 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.810367 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fe30f06-544d-43ab-a69a-952a0526de8c" containerName="registry-server" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.811093 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.813076 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.830859 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.831197 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.831648 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.834491 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.853396 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd"] Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.965912 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4743038-ff21-4107-8e3c-d576536e0c3c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.965998 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: 
\"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.966157 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.966264 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:53 crc kubenswrapper[4878]: I1204 16:11:53.966407 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np8g2\" (UniqueName: \"kubernetes.io/projected/c4743038-ff21-4107-8e3c-d576536e0c3c-kube-api-access-np8g2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.068271 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.068614 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.068671 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np8g2\" (UniqueName: \"kubernetes.io/projected/c4743038-ff21-4107-8e3c-d576536e0c3c-kube-api-access-np8g2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.068766 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4743038-ff21-4107-8e3c-d576536e0c3c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.068813 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.070806 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4743038-ff21-4107-8e3c-d576536e0c3c-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc 
kubenswrapper[4878]: I1204 16:11:54.072579 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.072816 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.073092 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.089772 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np8g2\" (UniqueName: \"kubernetes.io/projected/c4743038-ff21-4107-8e3c-d576536e0c3c-kube-api-access-np8g2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dqnbd\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.131588 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:11:54 crc kubenswrapper[4878]: I1204 16:11:54.677152 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd"] Dec 04 16:11:55 crc kubenswrapper[4878]: I1204 16:11:55.554554 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" event={"ID":"c4743038-ff21-4107-8e3c-d576536e0c3c","Type":"ContainerStarted","Data":"2d40f2a8a1927fc256a16b6e5690ccd70786349acd7b0dcbb6ad25279ed63076"} Dec 04 16:11:56 crc kubenswrapper[4878]: I1204 16:11:56.567617 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" event={"ID":"c4743038-ff21-4107-8e3c-d576536e0c3c","Type":"ContainerStarted","Data":"ec745aa45f6e53f175ea7f47aedb1dd3b80b5082925c101468447da6998956ff"} Dec 04 16:11:56 crc kubenswrapper[4878]: I1204 16:11:56.590914 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" podStartSLOduration=2.78447128 podStartE2EDuration="3.590861155s" podCreationTimestamp="2025-12-04 16:11:53 +0000 UTC" firstStartedPulling="2025-12-04 16:11:54.697920446 +0000 UTC m=+2158.660457392" lastFinishedPulling="2025-12-04 16:11:55.504310311 +0000 UTC m=+2159.466847267" observedRunningTime="2025-12-04 16:11:56.585385857 +0000 UTC m=+2160.547922833" watchObservedRunningTime="2025-12-04 16:11:56.590861155 +0000 UTC m=+2160.553398111" Dec 04 16:12:00 crc kubenswrapper[4878]: I1204 16:12:00.840896 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:12:00 crc kubenswrapper[4878]: I1204 16:12:00.841501 4878 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:12:30 crc kubenswrapper[4878]: I1204 16:12:30.840685 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:12:30 crc kubenswrapper[4878]: I1204 16:12:30.841331 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:12:30 crc kubenswrapper[4878]: I1204 16:12:30.841403 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 16:12:30 crc kubenswrapper[4878]: I1204 16:12:30.842318 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c603aa3422bb6021c46f5cf27e373633b467d44f95efffde184705345610235"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 16:12:30 crc kubenswrapper[4878]: I1204 16:12:30.842381 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" 
containerName="machine-config-daemon" containerID="cri-o://8c603aa3422bb6021c46f5cf27e373633b467d44f95efffde184705345610235" gracePeriod=600 Dec 04 16:12:31 crc kubenswrapper[4878]: I1204 16:12:31.908267 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="8c603aa3422bb6021c46f5cf27e373633b467d44f95efffde184705345610235" exitCode=0 Dec 04 16:12:31 crc kubenswrapper[4878]: I1204 16:12:31.908355 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"8c603aa3422bb6021c46f5cf27e373633b467d44f95efffde184705345610235"} Dec 04 16:12:31 crc kubenswrapper[4878]: I1204 16:12:31.909329 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d"} Dec 04 16:12:31 crc kubenswrapper[4878]: I1204 16:12:31.909372 4878 scope.go:117] "RemoveContainer" containerID="870cb1b4a0f463752cf93003a1485f52448c667e118c449ce6f7cc4932a38f46" Dec 04 16:13:03 crc kubenswrapper[4878]: I1204 16:13:03.200516 4878 generic.go:334] "Generic (PLEG): container finished" podID="c4743038-ff21-4107-8e3c-d576536e0c3c" containerID="ec745aa45f6e53f175ea7f47aedb1dd3b80b5082925c101468447da6998956ff" exitCode=0 Dec 04 16:13:03 crc kubenswrapper[4878]: I1204 16:13:03.200614 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" event={"ID":"c4743038-ff21-4107-8e3c-d576536e0c3c","Type":"ContainerDied","Data":"ec745aa45f6e53f175ea7f47aedb1dd3b80b5082925c101468447da6998956ff"} Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.671322 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.785343 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np8g2\" (UniqueName: \"kubernetes.io/projected/c4743038-ff21-4107-8e3c-d576536e0c3c-kube-api-access-np8g2\") pod \"c4743038-ff21-4107-8e3c-d576536e0c3c\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.785747 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ssh-key\") pod \"c4743038-ff21-4107-8e3c-d576536e0c3c\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.785851 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ovn-combined-ca-bundle\") pod \"c4743038-ff21-4107-8e3c-d576536e0c3c\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.785903 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-inventory\") pod \"c4743038-ff21-4107-8e3c-d576536e0c3c\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.785947 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4743038-ff21-4107-8e3c-d576536e0c3c-ovncontroller-config-0\") pod \"c4743038-ff21-4107-8e3c-d576536e0c3c\" (UID: \"c4743038-ff21-4107-8e3c-d576536e0c3c\") " Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.792674 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/projected/c4743038-ff21-4107-8e3c-d576536e0c3c-kube-api-access-np8g2" (OuterVolumeSpecName: "kube-api-access-np8g2") pod "c4743038-ff21-4107-8e3c-d576536e0c3c" (UID: "c4743038-ff21-4107-8e3c-d576536e0c3c"). InnerVolumeSpecName "kube-api-access-np8g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.792724 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "c4743038-ff21-4107-8e3c-d576536e0c3c" (UID: "c4743038-ff21-4107-8e3c-d576536e0c3c"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.818187 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-inventory" (OuterVolumeSpecName: "inventory") pod "c4743038-ff21-4107-8e3c-d576536e0c3c" (UID: "c4743038-ff21-4107-8e3c-d576536e0c3c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.818183 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4743038-ff21-4107-8e3c-d576536e0c3c-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "c4743038-ff21-4107-8e3c-d576536e0c3c" (UID: "c4743038-ff21-4107-8e3c-d576536e0c3c"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.820462 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c4743038-ff21-4107-8e3c-d576536e0c3c" (UID: "c4743038-ff21-4107-8e3c-d576536e0c3c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.889637 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-np8g2\" (UniqueName: \"kubernetes.io/projected/c4743038-ff21-4107-8e3c-d576536e0c3c-kube-api-access-np8g2\") on node \"crc\" DevicePath \"\"" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.889689 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.889701 4878 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.889714 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4743038-ff21-4107-8e3c-d576536e0c3c-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:13:04 crc kubenswrapper[4878]: I1204 16:13:04.889727 4878 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/c4743038-ff21-4107-8e3c-d576536e0c3c-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.219615 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" event={"ID":"c4743038-ff21-4107-8e3c-d576536e0c3c","Type":"ContainerDied","Data":"2d40f2a8a1927fc256a16b6e5690ccd70786349acd7b0dcbb6ad25279ed63076"} Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.219671 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d40f2a8a1927fc256a16b6e5690ccd70786349acd7b0dcbb6ad25279ed63076" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.219687 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dqnbd" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.330603 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts"] Dec 04 16:13:05 crc kubenswrapper[4878]: E1204 16:13:05.331232 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4743038-ff21-4107-8e3c-d576536e0c3c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.331254 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4743038-ff21-4107-8e3c-d576536e0c3c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.331472 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4743038-ff21-4107-8e3c-d576536e0c3c" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.332328 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.334083 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.335098 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.335200 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.335356 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.335528 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.335852 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.341367 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts"] Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.399776 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.400172 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.400451 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.400652 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.400736 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72hp\" (UniqueName: \"kubernetes.io/projected/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-kube-api-access-q72hp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.400893 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.502999 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.503068 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.503129 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.503166 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q72hp\" (UniqueName: 
\"kubernetes.io/projected/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-kube-api-access-q72hp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.503252 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.503346 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.506810 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.507648 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.508545 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.508797 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.514493 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc kubenswrapper[4878]: I1204 16:13:05.530323 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q72hp\" (UniqueName: \"kubernetes.io/projected/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-kube-api-access-q72hp\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:05 crc 
kubenswrapper[4878]: I1204 16:13:05.660752 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:06 crc kubenswrapper[4878]: I1204 16:13:06.194950 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts"] Dec 04 16:13:06 crc kubenswrapper[4878]: I1204 16:13:06.232159 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" event={"ID":"96e5fe1c-6d27-40bd-aea8-b89c718d54c0","Type":"ContainerStarted","Data":"77d7335082bf526e7b48f511c876f07e3d9986673283d564f9c560d571fe2330"} Dec 04 16:13:07 crc kubenswrapper[4878]: I1204 16:13:07.242278 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" event={"ID":"96e5fe1c-6d27-40bd-aea8-b89c718d54c0","Type":"ContainerStarted","Data":"6b16fab61486559e0ce8c0db4d409eb4a7cef4cd404a11f21150f68f7355e0bc"} Dec 04 16:13:07 crc kubenswrapper[4878]: I1204 16:13:07.273001 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" podStartSLOduration=1.8142921589999998 podStartE2EDuration="2.27297936s" podCreationTimestamp="2025-12-04 16:13:05 +0000 UTC" firstStartedPulling="2025-12-04 16:13:06.198995612 +0000 UTC m=+2230.161532568" lastFinishedPulling="2025-12-04 16:13:06.657682813 +0000 UTC m=+2230.620219769" observedRunningTime="2025-12-04 16:13:07.26269765 +0000 UTC m=+2231.225234606" watchObservedRunningTime="2025-12-04 16:13:07.27297936 +0000 UTC m=+2231.235516336" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.417731 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-llsv6"] Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.421396 4878 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.441489 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llsv6"] Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.482274 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-utilities\") pod \"certified-operators-llsv6\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.482632 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49hx\" (UniqueName: \"kubernetes.io/projected/c5a70e36-5283-4d62-9ec8-904e5b73a277-kube-api-access-p49hx\") pod \"certified-operators-llsv6\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.482905 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-catalog-content\") pod \"certified-operators-llsv6\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.583987 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-catalog-content\") pod \"certified-operators-llsv6\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.584059 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-utilities\") pod \"certified-operators-llsv6\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.584145 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49hx\" (UniqueName: \"kubernetes.io/projected/c5a70e36-5283-4d62-9ec8-904e5b73a277-kube-api-access-p49hx\") pod \"certified-operators-llsv6\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.584736 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-catalog-content\") pod \"certified-operators-llsv6\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.585049 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-utilities\") pod \"certified-operators-llsv6\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.616615 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49hx\" (UniqueName: \"kubernetes.io/projected/c5a70e36-5283-4d62-9ec8-904e5b73a277-kube-api-access-p49hx\") pod \"certified-operators-llsv6\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:27 crc kubenswrapper[4878]: I1204 16:13:27.754976 4878 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:28 crc kubenswrapper[4878]: I1204 16:13:28.274060 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llsv6"] Dec 04 16:13:28 crc kubenswrapper[4878]: I1204 16:13:28.470631 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llsv6" event={"ID":"c5a70e36-5283-4d62-9ec8-904e5b73a277","Type":"ContainerStarted","Data":"c5526dafe58e0f0a8d1b311d56fb5a4d42069248449dc8d0017f272810435487"} Dec 04 16:13:29 crc kubenswrapper[4878]: I1204 16:13:29.482982 4878 generic.go:334] "Generic (PLEG): container finished" podID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerID="4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca" exitCode=0 Dec 04 16:13:29 crc kubenswrapper[4878]: I1204 16:13:29.483164 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llsv6" event={"ID":"c5a70e36-5283-4d62-9ec8-904e5b73a277","Type":"ContainerDied","Data":"4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca"} Dec 04 16:13:33 crc kubenswrapper[4878]: I1204 16:13:33.525329 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llsv6" event={"ID":"c5a70e36-5283-4d62-9ec8-904e5b73a277","Type":"ContainerStarted","Data":"520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f"} Dec 04 16:13:34 crc kubenswrapper[4878]: I1204 16:13:34.538199 4878 generic.go:334] "Generic (PLEG): container finished" podID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerID="520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f" exitCode=0 Dec 04 16:13:34 crc kubenswrapper[4878]: I1204 16:13:34.538267 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llsv6" 
event={"ID":"c5a70e36-5283-4d62-9ec8-904e5b73a277","Type":"ContainerDied","Data":"520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f"} Dec 04 16:13:35 crc kubenswrapper[4878]: I1204 16:13:35.552596 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llsv6" event={"ID":"c5a70e36-5283-4d62-9ec8-904e5b73a277","Type":"ContainerStarted","Data":"1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9"} Dec 04 16:13:35 crc kubenswrapper[4878]: I1204 16:13:35.572966 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-llsv6" podStartSLOduration=3.105295208 podStartE2EDuration="8.572921536s" podCreationTimestamp="2025-12-04 16:13:27 +0000 UTC" firstStartedPulling="2025-12-04 16:13:29.486027701 +0000 UTC m=+2253.448564657" lastFinishedPulling="2025-12-04 16:13:34.953654039 +0000 UTC m=+2258.916190985" observedRunningTime="2025-12-04 16:13:35.572110835 +0000 UTC m=+2259.534647801" watchObservedRunningTime="2025-12-04 16:13:35.572921536 +0000 UTC m=+2259.535458492" Dec 04 16:13:37 crc kubenswrapper[4878]: I1204 16:13:37.755197 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:37 crc kubenswrapper[4878]: I1204 16:13:37.755667 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:37 crc kubenswrapper[4878]: I1204 16:13:37.811258 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.443562 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h7h"] Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.446614 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.459302 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h7h"] Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.558551 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-utilities\") pod \"redhat-marketplace-v7h7h\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") " pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.559227 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc5l2\" (UniqueName: \"kubernetes.io/projected/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-kube-api-access-hc5l2\") pod \"redhat-marketplace-v7h7h\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") " pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.559346 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-catalog-content\") pod \"redhat-marketplace-v7h7h\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") " pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.661203 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-utilities\") pod \"redhat-marketplace-v7h7h\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") " pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.661623 4878 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hc5l2\" (UniqueName: \"kubernetes.io/projected/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-kube-api-access-hc5l2\") pod \"redhat-marketplace-v7h7h\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") " pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.661757 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-catalog-content\") pod \"redhat-marketplace-v7h7h\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") " pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.661762 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-utilities\") pod \"redhat-marketplace-v7h7h\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") " pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.662113 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-catalog-content\") pod \"redhat-marketplace-v7h7h\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") " pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.700970 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc5l2\" (UniqueName: \"kubernetes.io/projected/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-kube-api-access-hc5l2\") pod \"redhat-marketplace-v7h7h\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") " pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:44 crc kubenswrapper[4878]: I1204 16:13:44.773896 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:45 crc kubenswrapper[4878]: I1204 16:13:45.250763 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h7h"] Dec 04 16:13:45 crc kubenswrapper[4878]: W1204 16:13:45.256138 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6303e60a_2a0f_40d7_9eac_53fb3ba08e5e.slice/crio-8c807836c87470c34294fc73ee9751c7b2e9438901752a89001a8c087c3cf066 WatchSource:0}: Error finding container 8c807836c87470c34294fc73ee9751c7b2e9438901752a89001a8c087c3cf066: Status 404 returned error can't find the container with id 8c807836c87470c34294fc73ee9751c7b2e9438901752a89001a8c087c3cf066 Dec 04 16:13:45 crc kubenswrapper[4878]: I1204 16:13:45.660523 4878 generic.go:334] "Generic (PLEG): container finished" podID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerID="2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2" exitCode=0 Dec 04 16:13:45 crc kubenswrapper[4878]: I1204 16:13:45.660570 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h7h" event={"ID":"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e","Type":"ContainerDied","Data":"2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2"} Dec 04 16:13:45 crc kubenswrapper[4878]: I1204 16:13:45.660604 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h7h" event={"ID":"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e","Type":"ContainerStarted","Data":"8c807836c87470c34294fc73ee9751c7b2e9438901752a89001a8c087c3cf066"} Dec 04 16:13:45 crc kubenswrapper[4878]: I1204 16:13:45.664520 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 16:13:47 crc kubenswrapper[4878]: I1204 16:13:47.821423 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:13:48 crc kubenswrapper[4878]: I1204 16:13:48.641048 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llsv6"] Dec 04 16:13:48 crc kubenswrapper[4878]: I1204 16:13:48.696275 4878 generic.go:334] "Generic (PLEG): container finished" podID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerID="7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6" exitCode=0 Dec 04 16:13:48 crc kubenswrapper[4878]: I1204 16:13:48.696368 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h7h" event={"ID":"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e","Type":"ContainerDied","Data":"7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6"} Dec 04 16:13:48 crc kubenswrapper[4878]: I1204 16:13:48.821458 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kr2f4"] Dec 04 16:13:48 crc kubenswrapper[4878]: I1204 16:13:48.821838 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kr2f4" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerName="registry-server" containerID="cri-o://c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e" gracePeriod=2 Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.404264 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.466214 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-catalog-content\") pod \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.466574 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-utilities\") pod \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.466765 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmlvb\" (UniqueName: \"kubernetes.io/projected/8dd6f90d-5a20-4f66-98e1-3e59edb42928-kube-api-access-pmlvb\") pod \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\" (UID: \"8dd6f90d-5a20-4f66-98e1-3e59edb42928\") " Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.467345 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-utilities" (OuterVolumeSpecName: "utilities") pod "8dd6f90d-5a20-4f66-98e1-3e59edb42928" (UID: "8dd6f90d-5a20-4f66-98e1-3e59edb42928"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.476698 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dd6f90d-5a20-4f66-98e1-3e59edb42928-kube-api-access-pmlvb" (OuterVolumeSpecName: "kube-api-access-pmlvb") pod "8dd6f90d-5a20-4f66-98e1-3e59edb42928" (UID: "8dd6f90d-5a20-4f66-98e1-3e59edb42928"). InnerVolumeSpecName "kube-api-access-pmlvb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.519476 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8dd6f90d-5a20-4f66-98e1-3e59edb42928" (UID: "8dd6f90d-5a20-4f66-98e1-3e59edb42928"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.571324 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmlvb\" (UniqueName: \"kubernetes.io/projected/8dd6f90d-5a20-4f66-98e1-3e59edb42928-kube-api-access-pmlvb\") on node \"crc\" DevicePath \"\"" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.571373 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.571384 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8dd6f90d-5a20-4f66-98e1-3e59edb42928-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.714791 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h7h" event={"ID":"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e","Type":"ContainerStarted","Data":"f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742"} Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.719333 4878 generic.go:334] "Generic (PLEG): container finished" podID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerID="c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e" exitCode=0 Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.719387 4878 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-kr2f4" event={"ID":"8dd6f90d-5a20-4f66-98e1-3e59edb42928","Type":"ContainerDied","Data":"c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e"} Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.719421 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kr2f4" event={"ID":"8dd6f90d-5a20-4f66-98e1-3e59edb42928","Type":"ContainerDied","Data":"9352001d7e129cd6f14a866dd5620f48429091ae029440e2200e5ff497e2fef6"} Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.719449 4878 scope.go:117] "RemoveContainer" containerID="c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.719779 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kr2f4" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.743951 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v7h7h" podStartSLOduration=2.211337533 podStartE2EDuration="5.74392581s" podCreationTimestamp="2025-12-04 16:13:44 +0000 UTC" firstStartedPulling="2025-12-04 16:13:45.664205689 +0000 UTC m=+2269.626742655" lastFinishedPulling="2025-12-04 16:13:49.196793976 +0000 UTC m=+2273.159330932" observedRunningTime="2025-12-04 16:13:49.737035016 +0000 UTC m=+2273.699571972" watchObservedRunningTime="2025-12-04 16:13:49.74392581 +0000 UTC m=+2273.706462766" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.774110 4878 scope.go:117] "RemoveContainer" containerID="095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.783412 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kr2f4"] Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.798354 4878 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/certified-operators-kr2f4"] Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.801759 4878 scope.go:117] "RemoveContainer" containerID="965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.860001 4878 scope.go:117] "RemoveContainer" containerID="c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e" Dec 04 16:13:49 crc kubenswrapper[4878]: E1204 16:13:49.861850 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e\": container with ID starting with c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e not found: ID does not exist" containerID="c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.861921 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e"} err="failed to get container status \"c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e\": rpc error: code = NotFound desc = could not find container \"c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e\": container with ID starting with c227f6cae947b507f5b93ee2af736f48bbb8e21fa45bdff017c2c06f9d7de32e not found: ID does not exist" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.861962 4878 scope.go:117] "RemoveContainer" containerID="095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543" Dec 04 16:13:49 crc kubenswrapper[4878]: E1204 16:13:49.862690 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543\": container with ID starting with 
095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543 not found: ID does not exist" containerID="095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.862747 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543"} err="failed to get container status \"095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543\": rpc error: code = NotFound desc = could not find container \"095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543\": container with ID starting with 095cf79de51e5f174230585b6327000b2d204822372cd7fa25ebd55b27d44543 not found: ID does not exist" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.862969 4878 scope.go:117] "RemoveContainer" containerID="965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243" Dec 04 16:13:49 crc kubenswrapper[4878]: E1204 16:13:49.863751 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243\": container with ID starting with 965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243 not found: ID does not exist" containerID="965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243" Dec 04 16:13:49 crc kubenswrapper[4878]: I1204 16:13:49.863778 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243"} err="failed to get container status \"965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243\": rpc error: code = NotFound desc = could not find container \"965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243\": container with ID starting with 965c8e65fa64828dbe2beb954843b86d9be17351afc0f3a6976f3199a4287243 not found: ID does not 
exist" Dec 04 16:13:51 crc kubenswrapper[4878]: I1204 16:13:51.190963 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" path="/var/lib/kubelet/pods/8dd6f90d-5a20-4f66-98e1-3e59edb42928/volumes" Dec 04 16:13:54 crc kubenswrapper[4878]: I1204 16:13:54.774158 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:54 crc kubenswrapper[4878]: I1204 16:13:54.774900 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:54 crc kubenswrapper[4878]: I1204 16:13:54.830021 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:55 crc kubenswrapper[4878]: I1204 16:13:55.825836 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v7h7h" Dec 04 16:13:56 crc kubenswrapper[4878]: I1204 16:13:56.790265 4878 generic.go:334] "Generic (PLEG): container finished" podID="96e5fe1c-6d27-40bd-aea8-b89c718d54c0" containerID="6b16fab61486559e0ce8c0db4d409eb4a7cef4cd404a11f21150f68f7355e0bc" exitCode=0 Dec 04 16:13:56 crc kubenswrapper[4878]: I1204 16:13:56.790347 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" event={"ID":"96e5fe1c-6d27-40bd-aea8-b89c718d54c0","Type":"ContainerDied","Data":"6b16fab61486559e0ce8c0db4d409eb4a7cef4cd404a11f21150f68f7355e0bc"} Dec 04 16:13:57 crc kubenswrapper[4878]: I1204 16:13:57.017394 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h7h"] Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.223034 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.251997 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-ssh-key\") pod \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.252670 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-inventory\") pod \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.252851 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.253136 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-metadata-combined-ca-bundle\") pod \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.253278 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-nova-metadata-neutron-config-0\") pod \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " Dec 04 16:13:58 crc 
kubenswrapper[4878]: I1204 16:13:58.253410 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q72hp\" (UniqueName: \"kubernetes.io/projected/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-kube-api-access-q72hp\") pod \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\" (UID: \"96e5fe1c-6d27-40bd-aea8-b89c718d54c0\") " Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.302081 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "96e5fe1c-6d27-40bd-aea8-b89c718d54c0" (UID: "96e5fe1c-6d27-40bd-aea8-b89c718d54c0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.302166 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-kube-api-access-q72hp" (OuterVolumeSpecName: "kube-api-access-q72hp") pod "96e5fe1c-6d27-40bd-aea8-b89c718d54c0" (UID: "96e5fe1c-6d27-40bd-aea8-b89c718d54c0"). InnerVolumeSpecName "kube-api-access-q72hp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.308430 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "96e5fe1c-6d27-40bd-aea8-b89c718d54c0" (UID: "96e5fe1c-6d27-40bd-aea8-b89c718d54c0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.314011 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "96e5fe1c-6d27-40bd-aea8-b89c718d54c0" (UID: "96e5fe1c-6d27-40bd-aea8-b89c718d54c0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.315068 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-inventory" (OuterVolumeSpecName: "inventory") pod "96e5fe1c-6d27-40bd-aea8-b89c718d54c0" (UID: "96e5fe1c-6d27-40bd-aea8-b89c718d54c0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.321678 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "96e5fe1c-6d27-40bd-aea8-b89c718d54c0" (UID: "96e5fe1c-6d27-40bd-aea8-b89c718d54c0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.368644 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.368693 4878 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.368713 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-inventory\") on node \"crc\" DevicePath \"\""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.368728 4878 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.368741 4878 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.368757 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q72hp\" (UniqueName: \"kubernetes.io/projected/96e5fe1c-6d27-40bd-aea8-b89c718d54c0-kube-api-access-q72hp\") on node \"crc\" DevicePath \"\""
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.815120 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v7h7h" podUID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerName="registry-server" containerID="cri-o://f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742" gracePeriod=2
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.816007 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts"
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.816141 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts" event={"ID":"96e5fe1c-6d27-40bd-aea8-b89c718d54c0","Type":"ContainerDied","Data":"77d7335082bf526e7b48f511c876f07e3d9986673283d564f9c560d571fe2330"}
Dec 04 16:13:58 crc kubenswrapper[4878]: I1204 16:13:58.816252 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77d7335082bf526e7b48f511c876f07e3d9986673283d564f9c560d571fe2330"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.013494 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"]
Dec 04 16:13:59 crc kubenswrapper[4878]: E1204 16:13:59.014287 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerName="extract-utilities"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.014418 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerName="extract-utilities"
Dec 04 16:13:59 crc kubenswrapper[4878]: E1204 16:13:59.014493 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerName="registry-server"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.014569 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerName="registry-server"
Dec 04 16:13:59 crc kubenswrapper[4878]: E1204 16:13:59.014667 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerName="extract-content"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.014744 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerName="extract-content"
Dec 04 16:13:59 crc kubenswrapper[4878]: E1204 16:13:59.014825 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e5fe1c-6d27-40bd-aea8-b89c718d54c0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.014938 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e5fe1c-6d27-40bd-aea8-b89c718d54c0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.015426 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e5fe1c-6d27-40bd-aea8-b89c718d54c0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.015530 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dd6f90d-5a20-4f66-98e1-3e59edb42928" containerName="registry-server"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.016453 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.019191 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.019408 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.019709 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.020058 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.020170 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.027334 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"]
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.083578 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.083647 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.083920 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.084224 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.084338 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxcbr\" (UniqueName: \"kubernetes.io/projected/7db5ad3f-e745-4eca-92d8-290800fe6115-kube-api-access-gxcbr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.186888 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.186948 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.187005 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.187061 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.187095 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxcbr\" (UniqueName: \"kubernetes.io/projected/7db5ad3f-e745-4eca-92d8-290800fe6115-kube-api-access-gxcbr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.193353 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.198503 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.199347 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.203364 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.209060 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxcbr\" (UniqueName: \"kubernetes.io/projected/7db5ad3f-e745-4eca-92d8-290800fe6115-kube-api-access-gxcbr\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-k2n55\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.305820 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7h7h"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.352850 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.390788 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-utilities\") pod \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") "
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.390925 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-catalog-content\") pod \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") "
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.391038 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc5l2\" (UniqueName: \"kubernetes.io/projected/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-kube-api-access-hc5l2\") pod \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\" (UID: \"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e\") "
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.391952 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-utilities" (OuterVolumeSpecName: "utilities") pod "6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" (UID: "6303e60a-2a0f-40d7-9eac-53fb3ba08e5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.397395 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-kube-api-access-hc5l2" (OuterVolumeSpecName: "kube-api-access-hc5l2") pod "6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" (UID: "6303e60a-2a0f-40d7-9eac-53fb3ba08e5e"). InnerVolumeSpecName "kube-api-access-hc5l2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.412821 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" (UID: "6303e60a-2a0f-40d7-9eac-53fb3ba08e5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.493619 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc5l2\" (UniqueName: \"kubernetes.io/projected/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-kube-api-access-hc5l2\") on node \"crc\" DevicePath \"\""
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.493657 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.493669 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.829203 4878 generic.go:334] "Generic (PLEG): container finished" podID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerID="f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742" exitCode=0
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.829358 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h7h" event={"ID":"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e","Type":"ContainerDied","Data":"f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742"}
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.829545 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v7h7h" event={"ID":"6303e60a-2a0f-40d7-9eac-53fb3ba08e5e","Type":"ContainerDied","Data":"8c807836c87470c34294fc73ee9751c7b2e9438901752a89001a8c087c3cf066"}
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.829573 4878 scope.go:117] "RemoveContainer" containerID="f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.829374 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v7h7h"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.862620 4878 scope.go:117] "RemoveContainer" containerID="7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.884704 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55"]
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.893932 4878 scope.go:117] "RemoveContainer" containerID="2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.900064 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h7h"]
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.910170 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v7h7h"]
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.939173 4878 scope.go:117] "RemoveContainer" containerID="f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742"
Dec 04 16:13:59 crc kubenswrapper[4878]: E1204 16:13:59.939677 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742\": container with ID starting with f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742 not found: ID does not exist" containerID="f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.939722 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742"} err="failed to get container status \"f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742\": rpc error: code = NotFound desc = could not find container \"f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742\": container with ID starting with f293679c03249b622c6142ca5da77555ddc5055ef2dddbb3f9e16a049fbd8742 not found: ID does not exist"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.939745 4878 scope.go:117] "RemoveContainer" containerID="7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6"
Dec 04 16:13:59 crc kubenswrapper[4878]: E1204 16:13:59.940123 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6\": container with ID starting with 7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6 not found: ID does not exist" containerID="7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.940157 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6"} err="failed to get container status \"7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6\": rpc error: code = NotFound desc = could not find container \"7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6\": container with ID starting with 7cb38f3d8d2d396f5f46a4cab5195285f5c55b0be006b65ae3f3ba46930625a6 not found: ID does not exist"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.940172 4878 scope.go:117] "RemoveContainer" containerID="2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2"
Dec 04 16:13:59 crc kubenswrapper[4878]: E1204 16:13:59.940407 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2\": container with ID starting with 2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2 not found: ID does not exist" containerID="2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2"
Dec 04 16:13:59 crc kubenswrapper[4878]: I1204 16:13:59.940430 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2"} err="failed to get container status \"2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2\": rpc error: code = NotFound desc = could not find container \"2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2\": container with ID starting with 2b8b03fd6d454a28709bf71178b0f772b8bcca12b957d2f2ba56e30cd23a6aa2 not found: ID does not exist"
Dec 04 16:14:00 crc kubenswrapper[4878]: I1204 16:14:00.842177 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55" event={"ID":"7db5ad3f-e745-4eca-92d8-290800fe6115","Type":"ContainerStarted","Data":"1579c01b1a68616acc334ac48da7c371ef241e1bbda8334c5e4add9499764e1c"}
Dec 04 16:14:00 crc kubenswrapper[4878]: I1204 16:14:00.842560 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55" event={"ID":"7db5ad3f-e745-4eca-92d8-290800fe6115","Type":"ContainerStarted","Data":"b1fa10910188876a1f10dc215f02e6d51952fd2dfea9be979a741b06cd11e578"}
Dec 04 16:14:00 crc kubenswrapper[4878]: I1204 16:14:00.863552 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55" podStartSLOduration=2.4480287179999998 podStartE2EDuration="2.863527966s" podCreationTimestamp="2025-12-04 16:13:58 +0000 UTC" firstStartedPulling="2025-12-04 16:13:59.894392119 +0000 UTC m=+2283.856929075" lastFinishedPulling="2025-12-04 16:14:00.309891367 +0000 UTC m=+2284.272428323" observedRunningTime="2025-12-04 16:14:00.860849558 +0000 UTC m=+2284.823386524" watchObservedRunningTime="2025-12-04 16:14:00.863527966 +0000 UTC m=+2284.826064922"
Dec 04 16:14:01 crc kubenswrapper[4878]: I1204 16:14:01.201370 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" path="/var/lib/kubelet/pods/6303e60a-2a0f-40d7-9eac-53fb3ba08e5e/volumes"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.745650 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hxhgr"]
Dec 04 16:14:35 crc kubenswrapper[4878]: E1204 16:14:35.746780 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerName="extract-utilities"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.746795 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerName="extract-utilities"
Dec 04 16:14:35 crc kubenswrapper[4878]: E1204 16:14:35.746803 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerName="extract-content"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.746809 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerName="extract-content"
Dec 04 16:14:35 crc kubenswrapper[4878]: E1204 16:14:35.746850 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerName="registry-server"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.746857 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerName="registry-server"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.747120 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="6303e60a-2a0f-40d7-9eac-53fb3ba08e5e" containerName="registry-server"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.748649 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.760251 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-catalog-content\") pod \"community-operators-hxhgr\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") " pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.760590 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-utilities\") pod \"community-operators-hxhgr\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") " pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.760755 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9psn\" (UniqueName: \"kubernetes.io/projected/c59880a4-b02f-4aac-bfe1-44353944f69e-kube-api-access-t9psn\") pod \"community-operators-hxhgr\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") " pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.763915 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxhgr"]
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.862989 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-utilities\") pod \"community-operators-hxhgr\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") " pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.863359 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9psn\" (UniqueName: \"kubernetes.io/projected/c59880a4-b02f-4aac-bfe1-44353944f69e-kube-api-access-t9psn\") pod \"community-operators-hxhgr\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") " pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.863470 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-utilities\") pod \"community-operators-hxhgr\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") " pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.863657 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-catalog-content\") pod \"community-operators-hxhgr\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") " pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.864029 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-catalog-content\") pod \"community-operators-hxhgr\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") " pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:35 crc kubenswrapper[4878]: I1204 16:14:35.884482 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9psn\" (UniqueName: \"kubernetes.io/projected/c59880a4-b02f-4aac-bfe1-44353944f69e-kube-api-access-t9psn\") pod \"community-operators-hxhgr\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") " pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:36 crc kubenswrapper[4878]: I1204 16:14:36.071574 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:36 crc kubenswrapper[4878]: I1204 16:14:36.600371 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hxhgr"]
Dec 04 16:14:37 crc kubenswrapper[4878]: I1204 16:14:37.180846 4878 generic.go:334] "Generic (PLEG): container finished" podID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerID="5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a" exitCode=0
Dec 04 16:14:37 crc kubenswrapper[4878]: I1204 16:14:37.190205 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxhgr" event={"ID":"c59880a4-b02f-4aac-bfe1-44353944f69e","Type":"ContainerDied","Data":"5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a"}
Dec 04 16:14:37 crc kubenswrapper[4878]: I1204 16:14:37.190254 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxhgr" event={"ID":"c59880a4-b02f-4aac-bfe1-44353944f69e","Type":"ContainerStarted","Data":"1d7ffb4c43e0cbb6ba397ec86647a3191b584063602a2f97215129f7455f5b98"}
Dec 04 16:14:39 crc kubenswrapper[4878]: I1204 16:14:39.202284 4878 generic.go:334] "Generic (PLEG): container finished" podID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerID="8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551" exitCode=0
Dec 04 16:14:39 crc kubenswrapper[4878]: I1204 16:14:39.202341 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxhgr" event={"ID":"c59880a4-b02f-4aac-bfe1-44353944f69e","Type":"ContainerDied","Data":"8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551"}
Dec 04 16:14:40 crc kubenswrapper[4878]: I1204 16:14:40.217985 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxhgr" event={"ID":"c59880a4-b02f-4aac-bfe1-44353944f69e","Type":"ContainerStarted","Data":"f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35"}
Dec 04 16:14:40 crc kubenswrapper[4878]: I1204 16:14:40.240671 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hxhgr" podStartSLOduration=2.822712054 podStartE2EDuration="5.240646217s" podCreationTimestamp="2025-12-04 16:14:35 +0000 UTC" firstStartedPulling="2025-12-04 16:14:37.192256454 +0000 UTC m=+2321.154793410" lastFinishedPulling="2025-12-04 16:14:39.610190617 +0000 UTC m=+2323.572727573" observedRunningTime="2025-12-04 16:14:40.235531848 +0000 UTC m=+2324.198068804" watchObservedRunningTime="2025-12-04 16:14:40.240646217 +0000 UTC m=+2324.203183173"
Dec 04 16:14:46 crc kubenswrapper[4878]: I1204 16:14:46.072595 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:46 crc kubenswrapper[4878]: I1204 16:14:46.073342 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:46 crc kubenswrapper[4878]: I1204 16:14:46.132618 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:46 crc kubenswrapper[4878]: I1204 16:14:46.348444 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:46 crc kubenswrapper[4878]: I1204 16:14:46.434648 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxhgr"]
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.308738 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hxhgr" podUID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerName="registry-server" containerID="cri-o://f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35" gracePeriod=2
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.874641 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.885206 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-catalog-content\") pod \"c59880a4-b02f-4aac-bfe1-44353944f69e\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") "
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.885405 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9psn\" (UniqueName: \"kubernetes.io/projected/c59880a4-b02f-4aac-bfe1-44353944f69e-kube-api-access-t9psn\") pod \"c59880a4-b02f-4aac-bfe1-44353944f69e\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") "
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.885452 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-utilities\") pod \"c59880a4-b02f-4aac-bfe1-44353944f69e\" (UID: \"c59880a4-b02f-4aac-bfe1-44353944f69e\") "
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.886279 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-utilities" (OuterVolumeSpecName: "utilities") pod "c59880a4-b02f-4aac-bfe1-44353944f69e" (UID: "c59880a4-b02f-4aac-bfe1-44353944f69e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.891281 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59880a4-b02f-4aac-bfe1-44353944f69e-kube-api-access-t9psn" (OuterVolumeSpecName: "kube-api-access-t9psn") pod "c59880a4-b02f-4aac-bfe1-44353944f69e" (UID: "c59880a4-b02f-4aac-bfe1-44353944f69e"). InnerVolumeSpecName "kube-api-access-t9psn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.950211 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c59880a4-b02f-4aac-bfe1-44353944f69e" (UID: "c59880a4-b02f-4aac-bfe1-44353944f69e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.986728 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.986789 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9psn\" (UniqueName: \"kubernetes.io/projected/c59880a4-b02f-4aac-bfe1-44353944f69e-kube-api-access-t9psn\") on node \"crc\" DevicePath \"\""
Dec 04 16:14:48 crc kubenswrapper[4878]: I1204 16:14:48.986808 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c59880a4-b02f-4aac-bfe1-44353944f69e-utilities\") on node \"crc\" DevicePath \"\""
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.320161 4878 generic.go:334] "Generic (PLEG): container finished" podID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerID="f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35" exitCode=0
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.320210 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxhgr" event={"ID":"c59880a4-b02f-4aac-bfe1-44353944f69e","Type":"ContainerDied","Data":"f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35"}
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.320245 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hxhgr" event={"ID":"c59880a4-b02f-4aac-bfe1-44353944f69e","Type":"ContainerDied","Data":"1d7ffb4c43e0cbb6ba397ec86647a3191b584063602a2f97215129f7455f5b98"}
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.320246 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hxhgr"
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.320263 4878 scope.go:117] "RemoveContainer" containerID="f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35"
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.343683 4878 scope.go:117] "RemoveContainer" containerID="8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551"
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.349226 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hxhgr"]
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.362921 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hxhgr"]
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.375461 4878 scope.go:117] "RemoveContainer" containerID="5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a"
Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.410253 4878 scope.go:117] "RemoveContainer" containerID="f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35"
Dec 04 
16:14:49 crc kubenswrapper[4878]: E1204 16:14:49.410704 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35\": container with ID starting with f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35 not found: ID does not exist" containerID="f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35" Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.410747 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35"} err="failed to get container status \"f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35\": rpc error: code = NotFound desc = could not find container \"f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35\": container with ID starting with f8a82b1f2e94c28c506c3b09d8fc1d3f206f8cb314a2c09743dbea5db0a2bf35 not found: ID does not exist" Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.410773 4878 scope.go:117] "RemoveContainer" containerID="8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551" Dec 04 16:14:49 crc kubenswrapper[4878]: E1204 16:14:49.411065 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551\": container with ID starting with 8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551 not found: ID does not exist" containerID="8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551" Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.411102 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551"} err="failed to get container status 
\"8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551\": rpc error: code = NotFound desc = could not find container \"8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551\": container with ID starting with 8e695c3ce2479077d745be8431ccef96121c1cb0f6f32e5589d23a86a9e4d551 not found: ID does not exist" Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.411124 4878 scope.go:117] "RemoveContainer" containerID="5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a" Dec 04 16:14:49 crc kubenswrapper[4878]: E1204 16:14:49.411368 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a\": container with ID starting with 5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a not found: ID does not exist" containerID="5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a" Dec 04 16:14:49 crc kubenswrapper[4878]: I1204 16:14:49.411396 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a"} err="failed to get container status \"5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a\": rpc error: code = NotFound desc = could not find container \"5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a\": container with ID starting with 5dfea0d66b1adf3ad3dedb936e0eb76fbb2f475c540e088af4c2c8d3b5f20a7a not found: ID does not exist" Dec 04 16:14:51 crc kubenswrapper[4878]: I1204 16:14:51.190460 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59880a4-b02f-4aac-bfe1-44353944f69e" path="/var/lib/kubelet/pods/c59880a4-b02f-4aac-bfe1-44353944f69e/volumes" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.168124 4878 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b"] Dec 04 16:15:00 crc kubenswrapper[4878]: E1204 16:15:00.169129 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerName="extract-utilities" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.169150 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerName="extract-utilities" Dec 04 16:15:00 crc kubenswrapper[4878]: E1204 16:15:00.169181 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerName="registry-server" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.169191 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerName="registry-server" Dec 04 16:15:00 crc kubenswrapper[4878]: E1204 16:15:00.169212 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerName="extract-content" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.169220 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerName="extract-content" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.169450 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59880a4-b02f-4aac-bfe1-44353944f69e" containerName="registry-server" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.172248 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.190841 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.191714 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.212597 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b"] Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.260350 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17cee0ef-82f8-4608-b27f-277dab45d20b-config-volume\") pod \"collect-profiles-29414415-dvq9b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.260402 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsf92\" (UniqueName: \"kubernetes.io/projected/17cee0ef-82f8-4608-b27f-277dab45d20b-kube-api-access-rsf92\") pod \"collect-profiles-29414415-dvq9b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.260706 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17cee0ef-82f8-4608-b27f-277dab45d20b-secret-volume\") pod \"collect-profiles-29414415-dvq9b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.363129 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsf92\" (UniqueName: \"kubernetes.io/projected/17cee0ef-82f8-4608-b27f-277dab45d20b-kube-api-access-rsf92\") pod \"collect-profiles-29414415-dvq9b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.363408 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17cee0ef-82f8-4608-b27f-277dab45d20b-config-volume\") pod \"collect-profiles-29414415-dvq9b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.363628 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17cee0ef-82f8-4608-b27f-277dab45d20b-secret-volume\") pod \"collect-profiles-29414415-dvq9b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.365072 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17cee0ef-82f8-4608-b27f-277dab45d20b-config-volume\") pod \"collect-profiles-29414415-dvq9b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.381120 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/17cee0ef-82f8-4608-b27f-277dab45d20b-secret-volume\") pod \"collect-profiles-29414415-dvq9b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.387534 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsf92\" (UniqueName: \"kubernetes.io/projected/17cee0ef-82f8-4608-b27f-277dab45d20b-kube-api-access-rsf92\") pod \"collect-profiles-29414415-dvq9b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.517529 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.840428 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.840783 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:15:00 crc kubenswrapper[4878]: I1204 16:15:00.973847 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b"] Dec 04 16:15:01 crc kubenswrapper[4878]: I1204 16:15:01.448229 4878 generic.go:334] "Generic (PLEG): container finished" podID="17cee0ef-82f8-4608-b27f-277dab45d20b" 
containerID="76cb5d7083aa72f7eb3127c9013d5805d11d2c3740bbc863735c2d2c664b9525" exitCode=0 Dec 04 16:15:01 crc kubenswrapper[4878]: I1204 16:15:01.448295 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" event={"ID":"17cee0ef-82f8-4608-b27f-277dab45d20b","Type":"ContainerDied","Data":"76cb5d7083aa72f7eb3127c9013d5805d11d2c3740bbc863735c2d2c664b9525"} Dec 04 16:15:01 crc kubenswrapper[4878]: I1204 16:15:01.448382 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" event={"ID":"17cee0ef-82f8-4608-b27f-277dab45d20b","Type":"ContainerStarted","Data":"74c1352c9181892cb09b88114ca9f48b33f1e875c2133b285bb7ea81df658aca"} Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.792572 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.818810 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsf92\" (UniqueName: \"kubernetes.io/projected/17cee0ef-82f8-4608-b27f-277dab45d20b-kube-api-access-rsf92\") pod \"17cee0ef-82f8-4608-b27f-277dab45d20b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.818960 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17cee0ef-82f8-4608-b27f-277dab45d20b-config-volume\") pod \"17cee0ef-82f8-4608-b27f-277dab45d20b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.819020 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17cee0ef-82f8-4608-b27f-277dab45d20b-secret-volume\") pod 
\"17cee0ef-82f8-4608-b27f-277dab45d20b\" (UID: \"17cee0ef-82f8-4608-b27f-277dab45d20b\") " Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.819642 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17cee0ef-82f8-4608-b27f-277dab45d20b-config-volume" (OuterVolumeSpecName: "config-volume") pod "17cee0ef-82f8-4608-b27f-277dab45d20b" (UID: "17cee0ef-82f8-4608-b27f-277dab45d20b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.824820 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17cee0ef-82f8-4608-b27f-277dab45d20b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "17cee0ef-82f8-4608-b27f-277dab45d20b" (UID: "17cee0ef-82f8-4608-b27f-277dab45d20b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.824881 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17cee0ef-82f8-4608-b27f-277dab45d20b-kube-api-access-rsf92" (OuterVolumeSpecName: "kube-api-access-rsf92") pod "17cee0ef-82f8-4608-b27f-277dab45d20b" (UID: "17cee0ef-82f8-4608-b27f-277dab45d20b"). InnerVolumeSpecName "kube-api-access-rsf92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.920926 4878 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/17cee0ef-82f8-4608-b27f-277dab45d20b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.921029 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsf92\" (UniqueName: \"kubernetes.io/projected/17cee0ef-82f8-4608-b27f-277dab45d20b-kube-api-access-rsf92\") on node \"crc\" DevicePath \"\"" Dec 04 16:15:02 crc kubenswrapper[4878]: I1204 16:15:02.921039 4878 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17cee0ef-82f8-4608-b27f-277dab45d20b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 16:15:03 crc kubenswrapper[4878]: I1204 16:15:03.472062 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" event={"ID":"17cee0ef-82f8-4608-b27f-277dab45d20b","Type":"ContainerDied","Data":"74c1352c9181892cb09b88114ca9f48b33f1e875c2133b285bb7ea81df658aca"} Dec 04 16:15:03 crc kubenswrapper[4878]: I1204 16:15:03.472126 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414415-dvq9b" Dec 04 16:15:03 crc kubenswrapper[4878]: I1204 16:15:03.472132 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c1352c9181892cb09b88114ca9f48b33f1e875c2133b285bb7ea81df658aca" Dec 04 16:15:03 crc kubenswrapper[4878]: I1204 16:15:03.869934 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9"] Dec 04 16:15:03 crc kubenswrapper[4878]: I1204 16:15:03.880260 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414370-pphl9"] Dec 04 16:15:05 crc kubenswrapper[4878]: I1204 16:15:05.191821 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e43204-f248-4d01-a9a8-9c264008e2fb" path="/var/lib/kubelet/pods/19e43204-f248-4d01-a9a8-9c264008e2fb/volumes" Dec 04 16:15:06 crc kubenswrapper[4878]: I1204 16:15:06.574493 4878 scope.go:117] "RemoveContainer" containerID="2be8d263d74396cd579ec25b6078d659548818ef985a10cf8eb441d9d7ae2978" Dec 04 16:15:30 crc kubenswrapper[4878]: I1204 16:15:30.840040 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:15:30 crc kubenswrapper[4878]: I1204 16:15:30.840628 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:16:00 crc kubenswrapper[4878]: I1204 16:16:00.840952 4878 patch_prober.go:28] interesting 
pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:16:00 crc kubenswrapper[4878]: I1204 16:16:00.841566 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:16:00 crc kubenswrapper[4878]: I1204 16:16:00.841619 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 16:16:00 crc kubenswrapper[4878]: I1204 16:16:00.842512 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 16:16:00 crc kubenswrapper[4878]: I1204 16:16:00.842563 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" gracePeriod=600 Dec 04 16:16:00 crc kubenswrapper[4878]: E1204 16:16:00.966186 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:16:00 crc kubenswrapper[4878]: I1204 16:16:00.977835 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" exitCode=0 Dec 04 16:16:00 crc kubenswrapper[4878]: I1204 16:16:00.977892 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d"} Dec 04 16:16:00 crc kubenswrapper[4878]: I1204 16:16:00.977970 4878 scope.go:117] "RemoveContainer" containerID="8c603aa3422bb6021c46f5cf27e373633b467d44f95efffde184705345610235" Dec 04 16:16:00 crc kubenswrapper[4878]: I1204 16:16:00.978772 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:16:00 crc kubenswrapper[4878]: E1204 16:16:00.979461 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:16:16 crc kubenswrapper[4878]: I1204 16:16:16.180315 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:16:16 crc kubenswrapper[4878]: E1204 16:16:16.181312 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:16:28 crc kubenswrapper[4878]: I1204 16:16:28.180859 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:16:28 crc kubenswrapper[4878]: E1204 16:16:28.181640 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:16:41 crc kubenswrapper[4878]: I1204 16:16:41.179820 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:16:41 crc kubenswrapper[4878]: E1204 16:16:41.180544 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:16:53 crc kubenswrapper[4878]: I1204 16:16:53.179443 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:16:53 crc kubenswrapper[4878]: E1204 16:16:53.180269 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:17:05 crc kubenswrapper[4878]: I1204 16:17:05.179814 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:17:05 crc kubenswrapper[4878]: E1204 16:17:05.180604 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:17:18 crc kubenswrapper[4878]: I1204 16:17:18.179737 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:17:18 crc kubenswrapper[4878]: E1204 16:17:18.181383 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:17:29 crc kubenswrapper[4878]: I1204 16:17:29.179829 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:17:29 crc kubenswrapper[4878]: E1204 16:17:29.180661 4878 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:17:43 crc kubenswrapper[4878]: I1204 16:17:43.180439 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:17:43 crc kubenswrapper[4878]: E1204 16:17:43.181936 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:17:54 crc kubenswrapper[4878]: I1204 16:17:54.179293 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:17:54 crc kubenswrapper[4878]: E1204 16:17:54.179943 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:18:06 crc kubenswrapper[4878]: I1204 16:18:06.181135 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:18:06 crc kubenswrapper[4878]: E1204 16:18:06.181989 4878 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:18:18 crc kubenswrapper[4878]: I1204 16:18:18.180520 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:18:18 crc kubenswrapper[4878]: E1204 16:18:18.181492 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:18:27 crc kubenswrapper[4878]: I1204 16:18:27.327769 4878 generic.go:334] "Generic (PLEG): container finished" podID="7db5ad3f-e745-4eca-92d8-290800fe6115" containerID="1579c01b1a68616acc334ac48da7c371ef241e1bbda8334c5e4add9499764e1c" exitCode=0 Dec 04 16:18:27 crc kubenswrapper[4878]: I1204 16:18:27.327824 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55" event={"ID":"7db5ad3f-e745-4eca-92d8-290800fe6115","Type":"ContainerDied","Data":"1579c01b1a68616acc334ac48da7c371ef241e1bbda8334c5e4add9499764e1c"} Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.745615 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.836551 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-ssh-key\") pod \"7db5ad3f-e745-4eca-92d8-290800fe6115\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.836686 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-secret-0\") pod \"7db5ad3f-e745-4eca-92d8-290800fe6115\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.836808 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-combined-ca-bundle\") pod \"7db5ad3f-e745-4eca-92d8-290800fe6115\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.836845 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxcbr\" (UniqueName: \"kubernetes.io/projected/7db5ad3f-e745-4eca-92d8-290800fe6115-kube-api-access-gxcbr\") pod \"7db5ad3f-e745-4eca-92d8-290800fe6115\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.836995 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-inventory\") pod \"7db5ad3f-e745-4eca-92d8-290800fe6115\" (UID: \"7db5ad3f-e745-4eca-92d8-290800fe6115\") " Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.842567 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/7db5ad3f-e745-4eca-92d8-290800fe6115-kube-api-access-gxcbr" (OuterVolumeSpecName: "kube-api-access-gxcbr") pod "7db5ad3f-e745-4eca-92d8-290800fe6115" (UID: "7db5ad3f-e745-4eca-92d8-290800fe6115"). InnerVolumeSpecName "kube-api-access-gxcbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.846119 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "7db5ad3f-e745-4eca-92d8-290800fe6115" (UID: "7db5ad3f-e745-4eca-92d8-290800fe6115"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.865565 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-inventory" (OuterVolumeSpecName: "inventory") pod "7db5ad3f-e745-4eca-92d8-290800fe6115" (UID: "7db5ad3f-e745-4eca-92d8-290800fe6115"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.865996 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "7db5ad3f-e745-4eca-92d8-290800fe6115" (UID: "7db5ad3f-e745-4eca-92d8-290800fe6115"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.867974 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7db5ad3f-e745-4eca-92d8-290800fe6115" (UID: "7db5ad3f-e745-4eca-92d8-290800fe6115"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.939532 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.939570 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.939579 4878 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.939592 4878 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db5ad3f-e745-4eca-92d8-290800fe6115-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:18:28 crc kubenswrapper[4878]: I1204 16:18:28.939605 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxcbr\" (UniqueName: \"kubernetes.io/projected/7db5ad3f-e745-4eca-92d8-290800fe6115-kube-api-access-gxcbr\") on node \"crc\" DevicePath \"\"" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.357243 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55" event={"ID":"7db5ad3f-e745-4eca-92d8-290800fe6115","Type":"ContainerDied","Data":"b1fa10910188876a1f10dc215f02e6d51952fd2dfea9be979a741b06cd11e578"} Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.357303 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1fa10910188876a1f10dc215f02e6d51952fd2dfea9be979a741b06cd11e578" Dec 04 16:18:29 
crc kubenswrapper[4878]: I1204 16:18:29.357589 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-k2n55" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.444280 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q"] Dec 04 16:18:29 crc kubenswrapper[4878]: E1204 16:18:29.445781 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17cee0ef-82f8-4608-b27f-277dab45d20b" containerName="collect-profiles" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.445810 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="17cee0ef-82f8-4608-b27f-277dab45d20b" containerName="collect-profiles" Dec 04 16:18:29 crc kubenswrapper[4878]: E1204 16:18:29.445842 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db5ad3f-e745-4eca-92d8-290800fe6115" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.445859 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db5ad3f-e745-4eca-92d8-290800fe6115" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.446100 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="17cee0ef-82f8-4608-b27f-277dab45d20b" containerName="collect-profiles" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.446131 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db5ad3f-e745-4eca-92d8-290800fe6115" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.446845 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.450756 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.450941 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.450770 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.451213 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.451346 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.451454 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.451573 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.469664 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q"] Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.551487 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 
16:18:29.551535 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.551564 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.551854 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.552122 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bsmx\" (UniqueName: \"kubernetes.io/projected/c5c443b7-778f-46ba-9ec4-312767ec3a27-kube-api-access-4bsmx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.552251 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.552420 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.552675 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.552776 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.655485 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.655546 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bsmx\" (UniqueName: \"kubernetes.io/projected/c5c443b7-778f-46ba-9ec4-312767ec3a27-kube-api-access-4bsmx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.655573 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.655614 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.655658 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.655734 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.656224 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.656256 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.656289 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.657402 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc 
kubenswrapper[4878]: I1204 16:18:29.660254 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.660342 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.660595 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.661405 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.661673 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: 
\"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.662006 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.663141 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.674717 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bsmx\" (UniqueName: \"kubernetes.io/projected/c5c443b7-778f-46ba-9ec4-312767ec3a27-kube-api-access-4bsmx\") pod \"nova-edpm-deployment-openstack-edpm-ipam-p9k2q\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:29 crc kubenswrapper[4878]: I1204 16:18:29.769954 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:18:30 crc kubenswrapper[4878]: I1204 16:18:30.295363 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q"] Dec 04 16:18:30 crc kubenswrapper[4878]: I1204 16:18:30.372080 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" event={"ID":"c5c443b7-778f-46ba-9ec4-312767ec3a27","Type":"ContainerStarted","Data":"d2182a61c032c9d1405e45f724afe709aee41a0dc369f654f24f6381d396cb1a"} Dec 04 16:18:31 crc kubenswrapper[4878]: I1204 16:18:31.388434 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" event={"ID":"c5c443b7-778f-46ba-9ec4-312767ec3a27","Type":"ContainerStarted","Data":"c7a328218baca21643944074ff674297407677228008f82a09df9523ada07975"} Dec 04 16:18:31 crc kubenswrapper[4878]: I1204 16:18:31.413835 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" podStartSLOduration=1.9867448699999999 podStartE2EDuration="2.41380146s" podCreationTimestamp="2025-12-04 16:18:29 +0000 UTC" firstStartedPulling="2025-12-04 16:18:30.300530291 +0000 UTC m=+2554.263067247" lastFinishedPulling="2025-12-04 16:18:30.727586881 +0000 UTC m=+2554.690123837" observedRunningTime="2025-12-04 16:18:31.409996764 +0000 UTC m=+2555.372533730" watchObservedRunningTime="2025-12-04 16:18:31.41380146 +0000 UTC m=+2555.376338416" Dec 04 16:18:32 crc kubenswrapper[4878]: I1204 16:18:32.180017 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:18:32 crc kubenswrapper[4878]: E1204 16:18:32.180326 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:18:47 crc kubenswrapper[4878]: I1204 16:18:47.186781 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:18:47 crc kubenswrapper[4878]: E1204 16:18:47.188223 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:18:59 crc kubenswrapper[4878]: I1204 16:18:59.180454 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:18:59 crc kubenswrapper[4878]: E1204 16:18:59.183457 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:19:13 crc kubenswrapper[4878]: I1204 16:19:13.180861 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:19:13 crc kubenswrapper[4878]: E1204 16:19:13.181720 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:19:28 crc kubenswrapper[4878]: I1204 16:19:28.246936 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:19:28 crc kubenswrapper[4878]: E1204 16:19:28.247954 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:19:40 crc kubenswrapper[4878]: I1204 16:19:40.181208 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:19:40 crc kubenswrapper[4878]: E1204 16:19:40.181934 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:19:51 crc kubenswrapper[4878]: I1204 16:19:51.179329 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:19:51 crc kubenswrapper[4878]: E1204 16:19:51.180229 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:20:05 crc kubenswrapper[4878]: I1204 16:20:05.181425 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:20:05 crc kubenswrapper[4878]: E1204 16:20:05.183426 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:20:16 crc kubenswrapper[4878]: I1204 16:20:16.181114 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:20:16 crc kubenswrapper[4878]: E1204 16:20:16.182087 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:20:30 crc kubenswrapper[4878]: I1204 16:20:30.180021 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:20:30 crc kubenswrapper[4878]: E1204 16:20:30.180802 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:20:41 crc kubenswrapper[4878]: I1204 16:20:41.179360 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:20:41 crc kubenswrapper[4878]: E1204 16:20:41.180134 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:20:55 crc kubenswrapper[4878]: I1204 16:20:55.180347 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:20:55 crc kubenswrapper[4878]: E1204 16:20:55.181136 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:21:10 crc kubenswrapper[4878]: I1204 16:21:10.180329 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:21:10 crc kubenswrapper[4878]: I1204 16:21:10.466262 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"a810ee06498f47a45c18518936b5a79f3af3fd2817743b65ecfd8a269411bb44"} Dec 04 16:21:21 crc kubenswrapper[4878]: I1204 16:21:21.601457 4878 generic.go:334] "Generic (PLEG): container finished" podID="c5c443b7-778f-46ba-9ec4-312767ec3a27" containerID="c7a328218baca21643944074ff674297407677228008f82a09df9523ada07975" exitCode=0 Dec 04 16:21:21 crc kubenswrapper[4878]: I1204 16:21:21.601556 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" event={"ID":"c5c443b7-778f-46ba-9ec4-312767ec3a27","Type":"ContainerDied","Data":"c7a328218baca21643944074ff674297407677228008f82a09df9523ada07975"} Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.020968 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.210797 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-1\") pod \"c5c443b7-778f-46ba-9ec4-312767ec3a27\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.210945 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-0\") pod \"c5c443b7-778f-46ba-9ec4-312767ec3a27\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.211439 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-extra-config-0\") pod \"c5c443b7-778f-46ba-9ec4-312767ec3a27\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.211924 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bsmx\" (UniqueName: \"kubernetes.io/projected/c5c443b7-778f-46ba-9ec4-312767ec3a27-kube-api-access-4bsmx\") pod \"c5c443b7-778f-46ba-9ec4-312767ec3a27\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.212165 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-0\") pod \"c5c443b7-778f-46ba-9ec4-312767ec3a27\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.213046 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-ssh-key\") pod \"c5c443b7-778f-46ba-9ec4-312767ec3a27\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.213106 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-inventory\") pod \"c5c443b7-778f-46ba-9ec4-312767ec3a27\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.213136 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-combined-ca-bundle\") pod \"c5c443b7-778f-46ba-9ec4-312767ec3a27\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " Dec 04 16:21:23 crc 
kubenswrapper[4878]: I1204 16:21:23.213198 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-1\") pod \"c5c443b7-778f-46ba-9ec4-312767ec3a27\" (UID: \"c5c443b7-778f-46ba-9ec4-312767ec3a27\") " Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.217518 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c443b7-778f-46ba-9ec4-312767ec3a27-kube-api-access-4bsmx" (OuterVolumeSpecName: "kube-api-access-4bsmx") pod "c5c443b7-778f-46ba-9ec4-312767ec3a27" (UID: "c5c443b7-778f-46ba-9ec4-312767ec3a27"). InnerVolumeSpecName "kube-api-access-4bsmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.218182 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "c5c443b7-778f-46ba-9ec4-312767ec3a27" (UID: "c5c443b7-778f-46ba-9ec4-312767ec3a27"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.241739 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "c5c443b7-778f-46ba-9ec4-312767ec3a27" (UID: "c5c443b7-778f-46ba-9ec4-312767ec3a27"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.244961 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "c5c443b7-778f-46ba-9ec4-312767ec3a27" (UID: "c5c443b7-778f-46ba-9ec4-312767ec3a27"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.245763 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c5c443b7-778f-46ba-9ec4-312767ec3a27" (UID: "c5c443b7-778f-46ba-9ec4-312767ec3a27"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.247395 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-inventory" (OuterVolumeSpecName: "inventory") pod "c5c443b7-778f-46ba-9ec4-312767ec3a27" (UID: "c5c443b7-778f-46ba-9ec4-312767ec3a27"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.248584 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "c5c443b7-778f-46ba-9ec4-312767ec3a27" (UID: "c5c443b7-778f-46ba-9ec4-312767ec3a27"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.254015 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "c5c443b7-778f-46ba-9ec4-312767ec3a27" (UID: "c5c443b7-778f-46ba-9ec4-312767ec3a27"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.257276 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "c5c443b7-778f-46ba-9ec4-312767ec3a27" (UID: "c5c443b7-778f-46ba-9ec4-312767ec3a27"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.315965 4878 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.316005 4878 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.316014 4878 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.316024 4878 reconciler_common.go:293] "Volume detached for volume 
\"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.316034 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bsmx\" (UniqueName: \"kubernetes.io/projected/c5c443b7-778f-46ba-9ec4-312767ec3a27-kube-api-access-4bsmx\") on node \"crc\" DevicePath \"\"" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.316042 4878 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.316052 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.316064 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.316073 4878 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c443b7-778f-46ba-9ec4-312767ec3a27-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.621051 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" event={"ID":"c5c443b7-778f-46ba-9ec4-312767ec3a27","Type":"ContainerDied","Data":"d2182a61c032c9d1405e45f724afe709aee41a0dc369f654f24f6381d396cb1a"} Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.621083 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-p9k2q" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.621241 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2182a61c032c9d1405e45f724afe709aee41a0dc369f654f24f6381d396cb1a" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.732055 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk"] Dec 04 16:21:23 crc kubenswrapper[4878]: E1204 16:21:23.732803 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c443b7-778f-46ba-9ec4-312767ec3a27" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.732819 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c443b7-778f-46ba-9ec4-312767ec3a27" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.733059 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c443b7-778f-46ba-9ec4-312767ec3a27" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.733899 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.736118 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.736123 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.736527 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.736745 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-62hbj" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.738965 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.749015 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk"] Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.825323 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.825384 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: 
\"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.825508 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.825569 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56wc\" (UniqueName: \"kubernetes.io/projected/5c550a94-c515-45cc-9c92-d7b9043486ef-kube-api-access-k56wc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.825592 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.825711 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 
16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.825749 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.927173 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.927242 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.927292 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.927320 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.927383 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.927439 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56wc\" (UniqueName: \"kubernetes.io/projected/5c550a94-c515-45cc-9c92-d7b9043486ef-kube-api-access-k56wc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.927470 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.931162 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") 
" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.932653 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.932653 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.945503 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.954383 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.954628 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:23 crc kubenswrapper[4878]: I1204 16:21:23.969110 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56wc\" (UniqueName: \"kubernetes.io/projected/5c550a94-c515-45cc-9c92-d7b9043486ef-kube-api-access-k56wc\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:24 crc kubenswrapper[4878]: I1204 16:21:24.054175 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:21:24 crc kubenswrapper[4878]: I1204 16:21:24.595172 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk"] Dec 04 16:21:24 crc kubenswrapper[4878]: I1204 16:21:24.596988 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 16:21:24 crc kubenswrapper[4878]: I1204 16:21:24.636558 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" event={"ID":"5c550a94-c515-45cc-9c92-d7b9043486ef","Type":"ContainerStarted","Data":"ccd0c5e178841a3e1f4b7a77b0317fb7372f79912a25490f9e455c38a43c11a9"} Dec 04 16:21:25 crc kubenswrapper[4878]: I1204 16:21:25.662249 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" event={"ID":"5c550a94-c515-45cc-9c92-d7b9043486ef","Type":"ContainerStarted","Data":"3f5851295ca9d0568f067a394ead0e6ba3946259199a45bcc234b8f81a6f02c0"} 
Dec 04 16:23:30 crc kubenswrapper[4878]: I1204 16:23:30.840999 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:23:30 crc kubenswrapper[4878]: I1204 16:23:30.841550 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:23:53 crc kubenswrapper[4878]: I1204 16:23:53.034251 4878 generic.go:334] "Generic (PLEG): container finished" podID="5c550a94-c515-45cc-9c92-d7b9043486ef" containerID="3f5851295ca9d0568f067a394ead0e6ba3946259199a45bcc234b8f81a6f02c0" exitCode=0 Dec 04 16:23:53 crc kubenswrapper[4878]: I1204 16:23:53.034323 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" event={"ID":"5c550a94-c515-45cc-9c92-d7b9043486ef","Type":"ContainerDied","Data":"3f5851295ca9d0568f067a394ead0e6ba3946259199a45bcc234b8f81a6f02c0"} Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.453954 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.605234 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-1\") pod \"5c550a94-c515-45cc-9c92-d7b9043486ef\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.605348 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-0\") pod \"5c550a94-c515-45cc-9c92-d7b9043486ef\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.605390 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-telemetry-combined-ca-bundle\") pod \"5c550a94-c515-45cc-9c92-d7b9043486ef\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.605537 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k56wc\" (UniqueName: \"kubernetes.io/projected/5c550a94-c515-45cc-9c92-d7b9043486ef-kube-api-access-k56wc\") pod \"5c550a94-c515-45cc-9c92-d7b9043486ef\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.605576 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-inventory\") pod \"5c550a94-c515-45cc-9c92-d7b9043486ef\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " Dec 04 16:23:54 crc kubenswrapper[4878]: 
I1204 16:23:54.605628 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-2\") pod \"5c550a94-c515-45cc-9c92-d7b9043486ef\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.605672 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ssh-key\") pod \"5c550a94-c515-45cc-9c92-d7b9043486ef\" (UID: \"5c550a94-c515-45cc-9c92-d7b9043486ef\") " Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.612264 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "5c550a94-c515-45cc-9c92-d7b9043486ef" (UID: "5c550a94-c515-45cc-9c92-d7b9043486ef"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.613079 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c550a94-c515-45cc-9c92-d7b9043486ef-kube-api-access-k56wc" (OuterVolumeSpecName: "kube-api-access-k56wc") pod "5c550a94-c515-45cc-9c92-d7b9043486ef" (UID: "5c550a94-c515-45cc-9c92-d7b9043486ef"). InnerVolumeSpecName "kube-api-access-k56wc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.634955 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "5c550a94-c515-45cc-9c92-d7b9043486ef" (UID: "5c550a94-c515-45cc-9c92-d7b9043486ef"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.637042 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "5c550a94-c515-45cc-9c92-d7b9043486ef" (UID: "5c550a94-c515-45cc-9c92-d7b9043486ef"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.641337 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-inventory" (OuterVolumeSpecName: "inventory") pod "5c550a94-c515-45cc-9c92-d7b9043486ef" (UID: "5c550a94-c515-45cc-9c92-d7b9043486ef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.642129 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "5c550a94-c515-45cc-9c92-d7b9043486ef" (UID: "5c550a94-c515-45cc-9c92-d7b9043486ef"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.642289 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5c550a94-c515-45cc-9c92-d7b9043486ef" (UID: "5c550a94-c515-45cc-9c92-d7b9043486ef"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.708340 4878 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.708394 4878 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.708405 4878 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.708416 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k56wc\" (UniqueName: \"kubernetes.io/projected/5c550a94-c515-45cc-9c92-d7b9043486ef-kube-api-access-k56wc\") on node \"crc\" DevicePath \"\"" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.708427 4878 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-inventory\") on node \"crc\" DevicePath \"\"" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.708436 4878 reconciler_common.go:293] 
"Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 04 16:23:54 crc kubenswrapper[4878]: I1204 16:23:54.708446 4878 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5c550a94-c515-45cc-9c92-d7b9043486ef-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:23:55 crc kubenswrapper[4878]: I1204 16:23:55.056223 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" event={"ID":"5c550a94-c515-45cc-9c92-d7b9043486ef","Type":"ContainerDied","Data":"ccd0c5e178841a3e1f4b7a77b0317fb7372f79912a25490f9e455c38a43c11a9"} Dec 04 16:23:55 crc kubenswrapper[4878]: I1204 16:23:55.056822 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccd0c5e178841a3e1f4b7a77b0317fb7372f79912a25490f9e455c38a43c11a9" Dec 04 16:23:55 crc kubenswrapper[4878]: I1204 16:23:55.056293 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk" Dec 04 16:24:00 crc kubenswrapper[4878]: I1204 16:24:00.840664 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:24:00 crc kubenswrapper[4878]: I1204 16:24:00.841260 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:24:04 crc kubenswrapper[4878]: I1204 16:24:04.970680 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qkm62"] Dec 04 16:24:04 crc kubenswrapper[4878]: E1204 16:24:04.971584 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c550a94-c515-45cc-9c92-d7b9043486ef" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 04 16:24:04 crc kubenswrapper[4878]: I1204 16:24:04.971605 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c550a94-c515-45cc-9c92-d7b9043486ef" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 04 16:24:04 crc kubenswrapper[4878]: I1204 16:24:04.971787 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c550a94-c515-45cc-9c92-d7b9043486ef" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 04 16:24:04 crc kubenswrapper[4878]: I1204 16:24:04.973419 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:04 crc kubenswrapper[4878]: I1204 16:24:04.980078 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkm62"] Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.033273 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tqsq\" (UniqueName: \"kubernetes.io/projected/aecdd56a-e9da-4688-871c-2888daf0719d-kube-api-access-8tqsq\") pod \"redhat-operators-qkm62\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.033412 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-utilities\") pod \"redhat-operators-qkm62\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.033456 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-catalog-content\") pod \"redhat-operators-qkm62\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.135848 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-utilities\") pod \"redhat-operators-qkm62\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.136224 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-catalog-content\") pod \"redhat-operators-qkm62\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.136464 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqsq\" (UniqueName: \"kubernetes.io/projected/aecdd56a-e9da-4688-871c-2888daf0719d-kube-api-access-8tqsq\") pod \"redhat-operators-qkm62\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.137336 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-utilities\") pod \"redhat-operators-qkm62\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.137510 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-catalog-content\") pod \"redhat-operators-qkm62\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.157836 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqsq\" (UniqueName: \"kubernetes.io/projected/aecdd56a-e9da-4688-871c-2888daf0719d-kube-api-access-8tqsq\") pod \"redhat-operators-qkm62\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.304734 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:05 crc kubenswrapper[4878]: I1204 16:24:05.836670 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qkm62"] Dec 04 16:24:06 crc kubenswrapper[4878]: I1204 16:24:06.166571 4878 generic.go:334] "Generic (PLEG): container finished" podID="aecdd56a-e9da-4688-871c-2888daf0719d" containerID="96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8" exitCode=0 Dec 04 16:24:06 crc kubenswrapper[4878]: I1204 16:24:06.166660 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkm62" event={"ID":"aecdd56a-e9da-4688-871c-2888daf0719d","Type":"ContainerDied","Data":"96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8"} Dec 04 16:24:06 crc kubenswrapper[4878]: I1204 16:24:06.166955 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkm62" event={"ID":"aecdd56a-e9da-4688-871c-2888daf0719d","Type":"ContainerStarted","Data":"49dd6df3b312d1a839c77629dd3817d39bc6e2d73fecabfd5ac3da8fcbe00f8b"} Dec 04 16:24:07 crc kubenswrapper[4878]: I1204 16:24:07.178574 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkm62" event={"ID":"aecdd56a-e9da-4688-871c-2888daf0719d","Type":"ContainerStarted","Data":"077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b"} Dec 04 16:24:10 crc kubenswrapper[4878]: I1204 16:24:10.207613 4878 generic.go:334] "Generic (PLEG): container finished" podID="aecdd56a-e9da-4688-871c-2888daf0719d" containerID="077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b" exitCode=0 Dec 04 16:24:10 crc kubenswrapper[4878]: I1204 16:24:10.207683 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkm62" 
event={"ID":"aecdd56a-e9da-4688-871c-2888daf0719d","Type":"ContainerDied","Data":"077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b"} Dec 04 16:24:12 crc kubenswrapper[4878]: I1204 16:24:12.228595 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkm62" event={"ID":"aecdd56a-e9da-4688-871c-2888daf0719d","Type":"ContainerStarted","Data":"637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90"} Dec 04 16:24:12 crc kubenswrapper[4878]: I1204 16:24:12.254261 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qkm62" podStartSLOduration=2.869515852 podStartE2EDuration="8.254232751s" podCreationTimestamp="2025-12-04 16:24:04 +0000 UTC" firstStartedPulling="2025-12-04 16:24:06.168663271 +0000 UTC m=+2890.131200227" lastFinishedPulling="2025-12-04 16:24:11.55338017 +0000 UTC m=+2895.515917126" observedRunningTime="2025-12-04 16:24:12.245656587 +0000 UTC m=+2896.208193543" watchObservedRunningTime="2025-12-04 16:24:12.254232751 +0000 UTC m=+2896.216769707" Dec 04 16:24:15 crc kubenswrapper[4878]: I1204 16:24:15.305518 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:15 crc kubenswrapper[4878]: I1204 16:24:15.306112 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:16 crc kubenswrapper[4878]: I1204 16:24:16.358681 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qkm62" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" containerName="registry-server" probeResult="failure" output=< Dec 04 16:24:16 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 16:24:16 crc kubenswrapper[4878]: > Dec 04 16:24:25 crc kubenswrapper[4878]: I1204 16:24:25.357260 4878 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:25 crc kubenswrapper[4878]: I1204 16:24:25.423851 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:25 crc kubenswrapper[4878]: I1204 16:24:25.599080 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkm62"] Dec 04 16:24:27 crc kubenswrapper[4878]: I1204 16:24:27.363598 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qkm62" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" containerName="registry-server" containerID="cri-o://637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90" gracePeriod=2 Dec 04 16:24:27 crc kubenswrapper[4878]: I1204 16:24:27.816245 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:27 crc kubenswrapper[4878]: I1204 16:24:27.922974 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-utilities\") pod \"aecdd56a-e9da-4688-871c-2888daf0719d\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " Dec 04 16:24:27 crc kubenswrapper[4878]: I1204 16:24:27.923039 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-catalog-content\") pod \"aecdd56a-e9da-4688-871c-2888daf0719d\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " Dec 04 16:24:27 crc kubenswrapper[4878]: I1204 16:24:27.923298 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tqsq\" (UniqueName: \"kubernetes.io/projected/aecdd56a-e9da-4688-871c-2888daf0719d-kube-api-access-8tqsq\") pod 
\"aecdd56a-e9da-4688-871c-2888daf0719d\" (UID: \"aecdd56a-e9da-4688-871c-2888daf0719d\") " Dec 04 16:24:27 crc kubenswrapper[4878]: I1204 16:24:27.923904 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-utilities" (OuterVolumeSpecName: "utilities") pod "aecdd56a-e9da-4688-871c-2888daf0719d" (UID: "aecdd56a-e9da-4688-871c-2888daf0719d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:24:27 crc kubenswrapper[4878]: I1204 16:24:27.928753 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecdd56a-e9da-4688-871c-2888daf0719d-kube-api-access-8tqsq" (OuterVolumeSpecName: "kube-api-access-8tqsq") pod "aecdd56a-e9da-4688-871c-2888daf0719d" (UID: "aecdd56a-e9da-4688-871c-2888daf0719d"). InnerVolumeSpecName "kube-api-access-8tqsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.025900 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tqsq\" (UniqueName: \"kubernetes.io/projected/aecdd56a-e9da-4688-871c-2888daf0719d-kube-api-access-8tqsq\") on node \"crc\" DevicePath \"\"" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.025948 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.038317 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aecdd56a-e9da-4688-871c-2888daf0719d" (UID: "aecdd56a-e9da-4688-871c-2888daf0719d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.128056 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aecdd56a-e9da-4688-871c-2888daf0719d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.375225 4878 generic.go:334] "Generic (PLEG): container finished" podID="aecdd56a-e9da-4688-871c-2888daf0719d" containerID="637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90" exitCode=0 Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.375317 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qkm62" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.375338 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkm62" event={"ID":"aecdd56a-e9da-4688-871c-2888daf0719d","Type":"ContainerDied","Data":"637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90"} Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.376099 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qkm62" event={"ID":"aecdd56a-e9da-4688-871c-2888daf0719d","Type":"ContainerDied","Data":"49dd6df3b312d1a839c77629dd3817d39bc6e2d73fecabfd5ac3da8fcbe00f8b"} Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.376139 4878 scope.go:117] "RemoveContainer" containerID="637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.399188 4878 scope.go:117] "RemoveContainer" containerID="077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.426204 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qkm62"] Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 
16:24:28.439408 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qkm62"] Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.449954 4878 scope.go:117] "RemoveContainer" containerID="96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.485913 4878 scope.go:117] "RemoveContainer" containerID="637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90" Dec 04 16:24:28 crc kubenswrapper[4878]: E1204 16:24:28.486426 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90\": container with ID starting with 637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90 not found: ID does not exist" containerID="637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.486464 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90"} err="failed to get container status \"637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90\": rpc error: code = NotFound desc = could not find container \"637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90\": container with ID starting with 637cccbc0eb4dd0037511cad19ce34ff3830d9da4431aedd1a234b4e7c630c90 not found: ID does not exist" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.486492 4878 scope.go:117] "RemoveContainer" containerID="077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b" Dec 04 16:24:28 crc kubenswrapper[4878]: E1204 16:24:28.486809 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b\": container with ID 
starting with 077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b not found: ID does not exist" containerID="077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.486847 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b"} err="failed to get container status \"077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b\": rpc error: code = NotFound desc = could not find container \"077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b\": container with ID starting with 077efa56f388d088cfdb8f70a2ac67a86d581f1a4429e20a1b6ef52483d4429b not found: ID does not exist" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.486894 4878 scope.go:117] "RemoveContainer" containerID="96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8" Dec 04 16:24:28 crc kubenswrapper[4878]: E1204 16:24:28.487371 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8\": container with ID starting with 96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8 not found: ID does not exist" containerID="96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8" Dec 04 16:24:28 crc kubenswrapper[4878]: I1204 16:24:28.487417 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8"} err="failed to get container status \"96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8\": rpc error: code = NotFound desc = could not find container \"96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8\": container with ID starting with 96440311789d184afe1b31ced23db287348309263e398c5ed87ca31b2d5c71c8 not found: 
ID does not exist" Dec 04 16:24:29 crc kubenswrapper[4878]: I1204 16:24:29.190883 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" path="/var/lib/kubelet/pods/aecdd56a-e9da-4688-871c-2888daf0719d/volumes" Dec 04 16:24:30 crc kubenswrapper[4878]: I1204 16:24:30.840373 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:24:30 crc kubenswrapper[4878]: I1204 16:24:30.840683 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:24:30 crc kubenswrapper[4878]: I1204 16:24:30.840735 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 16:24:30 crc kubenswrapper[4878]: I1204 16:24:30.841565 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a810ee06498f47a45c18518936b5a79f3af3fd2817743b65ecfd8a269411bb44"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 16:24:30 crc kubenswrapper[4878]: I1204 16:24:30.841618 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" 
containerID="cri-o://a810ee06498f47a45c18518936b5a79f3af3fd2817743b65ecfd8a269411bb44" gracePeriod=600 Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.013970 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5k5sc"] Dec 04 16:24:31 crc kubenswrapper[4878]: E1204 16:24:31.014538 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" containerName="registry-server" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.014563 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" containerName="registry-server" Dec 04 16:24:31 crc kubenswrapper[4878]: E1204 16:24:31.014607 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" containerName="extract-utilities" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.014615 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" containerName="extract-utilities" Dec 04 16:24:31 crc kubenswrapper[4878]: E1204 16:24:31.014630 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" containerName="extract-content" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.014637 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" containerName="extract-content" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.015053 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecdd56a-e9da-4688-871c-2888daf0719d" containerName="registry-server" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.016618 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.050330 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5k5sc"] Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.218855 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7njhh\" (UniqueName: \"kubernetes.io/projected/80160247-519c-4e9e-8e42-ee9cc68a4f07-kube-api-access-7njhh\") pod \"certified-operators-5k5sc\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.219425 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-catalog-content\") pod \"certified-operators-5k5sc\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.219551 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-utilities\") pod \"certified-operators-5k5sc\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.321111 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-utilities\") pod \"certified-operators-5k5sc\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.321309 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7njhh\" (UniqueName: \"kubernetes.io/projected/80160247-519c-4e9e-8e42-ee9cc68a4f07-kube-api-access-7njhh\") pod \"certified-operators-5k5sc\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.321365 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-catalog-content\") pod \"certified-operators-5k5sc\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.321613 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-utilities\") pod \"certified-operators-5k5sc\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.321952 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-catalog-content\") pod \"certified-operators-5k5sc\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.358696 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7njhh\" (UniqueName: \"kubernetes.io/projected/80160247-519c-4e9e-8e42-ee9cc68a4f07-kube-api-access-7njhh\") pod \"certified-operators-5k5sc\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.416087 4878 generic.go:334] "Generic (PLEG): container finished" 
podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="a810ee06498f47a45c18518936b5a79f3af3fd2817743b65ecfd8a269411bb44" exitCode=0 Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.416146 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"a810ee06498f47a45c18518936b5a79f3af3fd2817743b65ecfd8a269411bb44"} Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.416189 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"} Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.416211 4878 scope.go:117] "RemoveContainer" containerID="78d462073d45cadeaa341c6537c0fd9ba97bd6c6e951ffe85828b9f2ab42298d" Dec 04 16:24:31 crc kubenswrapper[4878]: I1204 16:24:31.651089 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:32 crc kubenswrapper[4878]: I1204 16:24:32.150768 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5k5sc"] Dec 04 16:24:32 crc kubenswrapper[4878]: I1204 16:24:32.428568 4878 generic.go:334] "Generic (PLEG): container finished" podID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerID="a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787" exitCode=0 Dec 04 16:24:32 crc kubenswrapper[4878]: I1204 16:24:32.428653 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5k5sc" event={"ID":"80160247-519c-4e9e-8e42-ee9cc68a4f07","Type":"ContainerDied","Data":"a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787"} Dec 04 16:24:32 crc kubenswrapper[4878]: I1204 16:24:32.428974 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5k5sc" event={"ID":"80160247-519c-4e9e-8e42-ee9cc68a4f07","Type":"ContainerStarted","Data":"9281007b7a05d1155aa4514b77e91fb622f3fec6959e2c24896df3bd740a4a3d"} Dec 04 16:24:33 crc kubenswrapper[4878]: I1204 16:24:33.449462 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5k5sc" event={"ID":"80160247-519c-4e9e-8e42-ee9cc68a4f07","Type":"ContainerStarted","Data":"0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7"} Dec 04 16:24:34 crc kubenswrapper[4878]: I1204 16:24:34.460666 4878 generic.go:334] "Generic (PLEG): container finished" podID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerID="0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7" exitCode=0 Dec 04 16:24:34 crc kubenswrapper[4878]: I1204 16:24:34.460703 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5k5sc" 
event={"ID":"80160247-519c-4e9e-8e42-ee9cc68a4f07","Type":"ContainerDied","Data":"0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7"} Dec 04 16:24:35 crc kubenswrapper[4878]: I1204 16:24:35.472728 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5k5sc" event={"ID":"80160247-519c-4e9e-8e42-ee9cc68a4f07","Type":"ContainerStarted","Data":"34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691"} Dec 04 16:24:35 crc kubenswrapper[4878]: I1204 16:24:35.501704 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5k5sc" podStartSLOduration=3.017275253 podStartE2EDuration="5.501677017s" podCreationTimestamp="2025-12-04 16:24:30 +0000 UTC" firstStartedPulling="2025-12-04 16:24:32.43109337 +0000 UTC m=+2916.393630316" lastFinishedPulling="2025-12-04 16:24:34.915495124 +0000 UTC m=+2918.878032080" observedRunningTime="2025-12-04 16:24:35.491238227 +0000 UTC m=+2919.453775193" watchObservedRunningTime="2025-12-04 16:24:35.501677017 +0000 UTC m=+2919.464213983" Dec 04 16:24:41 crc kubenswrapper[4878]: I1204 16:24:41.651838 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:41 crc kubenswrapper[4878]: I1204 16:24:41.652628 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:41 crc kubenswrapper[4878]: I1204 16:24:41.698124 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:42 crc kubenswrapper[4878]: I1204 16:24:42.590412 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:42 crc kubenswrapper[4878]: I1204 16:24:42.636316 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-5k5sc"] Dec 04 16:24:44 crc kubenswrapper[4878]: I1204 16:24:44.563796 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5k5sc" podUID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerName="registry-server" containerID="cri-o://34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691" gracePeriod=2 Dec 04 16:24:44 crc kubenswrapper[4878]: E1204 16:24:44.779532 4878 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80160247_519c_4e9e_8e42_ee9cc68a4f07.slice/crio-conmon-34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80160247_519c_4e9e_8e42_ee9cc68a4f07.slice/crio-34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691.scope\": RecentStats: unable to find data in memory cache]" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.010715 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.205056 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-utilities\") pod \"80160247-519c-4e9e-8e42-ee9cc68a4f07\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.205934 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7njhh\" (UniqueName: \"kubernetes.io/projected/80160247-519c-4e9e-8e42-ee9cc68a4f07-kube-api-access-7njhh\") pod \"80160247-519c-4e9e-8e42-ee9cc68a4f07\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.206077 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-catalog-content\") pod \"80160247-519c-4e9e-8e42-ee9cc68a4f07\" (UID: \"80160247-519c-4e9e-8e42-ee9cc68a4f07\") " Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.205940 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-utilities" (OuterVolumeSpecName: "utilities") pod "80160247-519c-4e9e-8e42-ee9cc68a4f07" (UID: "80160247-519c-4e9e-8e42-ee9cc68a4f07"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.206836 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.212737 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80160247-519c-4e9e-8e42-ee9cc68a4f07-kube-api-access-7njhh" (OuterVolumeSpecName: "kube-api-access-7njhh") pod "80160247-519c-4e9e-8e42-ee9cc68a4f07" (UID: "80160247-519c-4e9e-8e42-ee9cc68a4f07"). InnerVolumeSpecName "kube-api-access-7njhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.257930 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80160247-519c-4e9e-8e42-ee9cc68a4f07" (UID: "80160247-519c-4e9e-8e42-ee9cc68a4f07"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.309039 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7njhh\" (UniqueName: \"kubernetes.io/projected/80160247-519c-4e9e-8e42-ee9cc68a4f07-kube-api-access-7njhh\") on node \"crc\" DevicePath \"\"" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.309090 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80160247-519c-4e9e-8e42-ee9cc68a4f07-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.575347 4878 generic.go:334] "Generic (PLEG): container finished" podID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerID="34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691" exitCode=0 Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.575396 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5k5sc" event={"ID":"80160247-519c-4e9e-8e42-ee9cc68a4f07","Type":"ContainerDied","Data":"34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691"} Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.575417 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5k5sc" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.575433 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5k5sc" event={"ID":"80160247-519c-4e9e-8e42-ee9cc68a4f07","Type":"ContainerDied","Data":"9281007b7a05d1155aa4514b77e91fb622f3fec6959e2c24896df3bd740a4a3d"} Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.575451 4878 scope.go:117] "RemoveContainer" containerID="34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.598363 4878 scope.go:117] "RemoveContainer" containerID="0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.631668 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5k5sc"] Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.631917 4878 scope.go:117] "RemoveContainer" containerID="a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.641765 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5k5sc"] Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.680456 4878 scope.go:117] "RemoveContainer" containerID="34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691" Dec 04 16:24:45 crc kubenswrapper[4878]: E1204 16:24:45.681156 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691\": container with ID starting with 34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691 not found: ID does not exist" containerID="34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.681218 4878 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691"} err="failed to get container status \"34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691\": rpc error: code = NotFound desc = could not find container \"34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691\": container with ID starting with 34db8a089c45bb07bf8ec656ce7c40f9f12e164f0ee7d2c42f88991f550b1691 not found: ID does not exist" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.681265 4878 scope.go:117] "RemoveContainer" containerID="0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7" Dec 04 16:24:45 crc kubenswrapper[4878]: E1204 16:24:45.681765 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7\": container with ID starting with 0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7 not found: ID does not exist" containerID="0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.681810 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7"} err="failed to get container status \"0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7\": rpc error: code = NotFound desc = could not find container \"0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7\": container with ID starting with 0efd25d944a5ea510166ff6ba7b86b1ed40a874d5552af2e8fb9c463289847e7 not found: ID does not exist" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.681845 4878 scope.go:117] "RemoveContainer" containerID="a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787" Dec 04 16:24:45 crc kubenswrapper[4878]: E1204 
16:24:45.682665 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787\": container with ID starting with a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787 not found: ID does not exist" containerID="a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787" Dec 04 16:24:45 crc kubenswrapper[4878]: I1204 16:24:45.682710 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787"} err="failed to get container status \"a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787\": rpc error: code = NotFound desc = could not find container \"a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787\": container with ID starting with a2f3b3cefbfdd4d2e1a762cfe1dbf2512bdcdacae7d5065ae6c8485533e50787 not found: ID does not exist" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.192901 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80160247-519c-4e9e-8e42-ee9cc68a4f07" path="/var/lib/kubelet/pods/80160247-519c-4e9e-8e42-ee9cc68a4f07/volumes" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.346788 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ldjr4"] Dec 04 16:24:47 crc kubenswrapper[4878]: E1204 16:24:47.347363 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerName="extract-content" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.347381 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerName="extract-content" Dec 04 16:24:47 crc kubenswrapper[4878]: E1204 16:24:47.347396 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerName="extract-utilities" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.347404 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerName="extract-utilities" Dec 04 16:24:47 crc kubenswrapper[4878]: E1204 16:24:47.347441 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerName="registry-server" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.347448 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerName="registry-server" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.347669 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="80160247-519c-4e9e-8e42-ee9cc68a4f07" containerName="registry-server" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.349424 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.351283 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-catalog-content\") pod \"community-operators-ldjr4\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.351338 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-utilities\") pod \"community-operators-ldjr4\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.351379 4878 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxl7k\" (UniqueName: \"kubernetes.io/projected/bba3fcb5-6725-47ef-a43d-bf78d64af13c-kube-api-access-pxl7k\") pod \"community-operators-ldjr4\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.365367 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ldjr4"] Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.453323 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxl7k\" (UniqueName: \"kubernetes.io/projected/bba3fcb5-6725-47ef-a43d-bf78d64af13c-kube-api-access-pxl7k\") pod \"community-operators-ldjr4\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.453660 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-catalog-content\") pod \"community-operators-ldjr4\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.453717 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-utilities\") pod \"community-operators-ldjr4\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.454309 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-catalog-content\") pod \"community-operators-ldjr4\" (UID: 
\"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.454343 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-utilities\") pod \"community-operators-ldjr4\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.471896 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxl7k\" (UniqueName: \"kubernetes.io/projected/bba3fcb5-6725-47ef-a43d-bf78d64af13c-kube-api-access-pxl7k\") pod \"community-operators-ldjr4\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:47 crc kubenswrapper[4878]: I1204 16:24:47.675198 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:48 crc kubenswrapper[4878]: I1204 16:24:48.196394 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ldjr4"] Dec 04 16:24:48 crc kubenswrapper[4878]: I1204 16:24:48.609968 4878 generic.go:334] "Generic (PLEG): container finished" podID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerID="537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42" exitCode=0 Dec 04 16:24:48 crc kubenswrapper[4878]: I1204 16:24:48.610156 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldjr4" event={"ID":"bba3fcb5-6725-47ef-a43d-bf78d64af13c","Type":"ContainerDied","Data":"537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42"} Dec 04 16:24:48 crc kubenswrapper[4878]: I1204 16:24:48.610581 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldjr4" 
event={"ID":"bba3fcb5-6725-47ef-a43d-bf78d64af13c","Type":"ContainerStarted","Data":"bba06aadb27d52fb29e8ae6b89593ce01a402e90a5d22e30a34357a25eb5336e"} Dec 04 16:24:49 crc kubenswrapper[4878]: I1204 16:24:49.620342 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldjr4" event={"ID":"bba3fcb5-6725-47ef-a43d-bf78d64af13c","Type":"ContainerStarted","Data":"da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8"} Dec 04 16:24:50 crc kubenswrapper[4878]: I1204 16:24:50.632992 4878 generic.go:334] "Generic (PLEG): container finished" podID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerID="da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8" exitCode=0 Dec 04 16:24:50 crc kubenswrapper[4878]: I1204 16:24:50.633086 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldjr4" event={"ID":"bba3fcb5-6725-47ef-a43d-bf78d64af13c","Type":"ContainerDied","Data":"da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8"} Dec 04 16:24:51 crc kubenswrapper[4878]: I1204 16:24:51.644554 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldjr4" event={"ID":"bba3fcb5-6725-47ef-a43d-bf78d64af13c","Type":"ContainerStarted","Data":"5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7"} Dec 04 16:24:51 crc kubenswrapper[4878]: I1204 16:24:51.665215 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ldjr4" podStartSLOduration=2.21452007 podStartE2EDuration="4.665194235s" podCreationTimestamp="2025-12-04 16:24:47 +0000 UTC" firstStartedPulling="2025-12-04 16:24:48.611858026 +0000 UTC m=+2932.574394982" lastFinishedPulling="2025-12-04 16:24:51.062532191 +0000 UTC m=+2935.025069147" observedRunningTime="2025-12-04 16:24:51.660360445 +0000 UTC m=+2935.622897401" watchObservedRunningTime="2025-12-04 16:24:51.665194235 +0000 UTC 
m=+2935.627731191" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.295352 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.298309 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.302359 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.302696 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tgvf9" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.302989 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.303522 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.306202 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.453521 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5tvt\" (UniqueName: \"kubernetes.io/projected/3e394916-5de1-45b1-9e49-246be63a5689-kube-api-access-s5tvt\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.453610 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 
04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.453649 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.453697 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.453995 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.454260 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-config-data\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.454306 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config\") pod \"tempest-tests-tempest\" (UID: 
\"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.454420 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.454481 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.556674 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.556782 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-config-data\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.556817 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc 
kubenswrapper[4878]: I1204 16:24:52.556883 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.556937 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.556998 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5tvt\" (UniqueName: \"kubernetes.io/projected/3e394916-5de1-45b1-9e49-246be63a5689-kube-api-access-s5tvt\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.557032 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.557065 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.557111 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.557460 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.557647 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.557823 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.558392 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-config-data\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.559073 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.566918 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.567047 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.567621 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.577758 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5tvt\" (UniqueName: \"kubernetes.io/projected/3e394916-5de1-45b1-9e49-246be63a5689-kube-api-access-s5tvt\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.596758 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"tempest-tests-tempest\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " 
pod="openstack/tempest-tests-tempest" Dec 04 16:24:52 crc kubenswrapper[4878]: I1204 16:24:52.630399 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 04 16:24:53 crc kubenswrapper[4878]: I1204 16:24:53.083929 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 04 16:24:53 crc kubenswrapper[4878]: W1204 16:24:53.089380 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e394916_5de1_45b1_9e49_246be63a5689.slice/crio-a68819830ca05f996b8b419ea6e17cefbbf0b0b6a28c47b09eae9c1ec95fe0a4 WatchSource:0}: Error finding container a68819830ca05f996b8b419ea6e17cefbbf0b0b6a28c47b09eae9c1ec95fe0a4: Status 404 returned error can't find the container with id a68819830ca05f996b8b419ea6e17cefbbf0b0b6a28c47b09eae9c1ec95fe0a4 Dec 04 16:24:53 crc kubenswrapper[4878]: I1204 16:24:53.673091 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3e394916-5de1-45b1-9e49-246be63a5689","Type":"ContainerStarted","Data":"a68819830ca05f996b8b419ea6e17cefbbf0b0b6a28c47b09eae9c1ec95fe0a4"} Dec 04 16:24:57 crc kubenswrapper[4878]: I1204 16:24:57.675419 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:57 crc kubenswrapper[4878]: I1204 16:24:57.676017 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:57 crc kubenswrapper[4878]: I1204 16:24:57.729058 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:57 crc kubenswrapper[4878]: I1204 16:24:57.784713 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:24:57 
crc kubenswrapper[4878]: I1204 16:24:57.966742 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ldjr4"] Dec 04 16:24:59 crc kubenswrapper[4878]: I1204 16:24:59.734035 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ldjr4" podUID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerName="registry-server" containerID="cri-o://5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7" gracePeriod=2 Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.529423 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.631756 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxl7k\" (UniqueName: \"kubernetes.io/projected/bba3fcb5-6725-47ef-a43d-bf78d64af13c-kube-api-access-pxl7k\") pod \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.632037 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-catalog-content\") pod \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.632089 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-utilities\") pod \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\" (UID: \"bba3fcb5-6725-47ef-a43d-bf78d64af13c\") " Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.633045 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-utilities" (OuterVolumeSpecName: "utilities") pod "bba3fcb5-6725-47ef-a43d-bf78d64af13c" (UID: "bba3fcb5-6725-47ef-a43d-bf78d64af13c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.639242 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba3fcb5-6725-47ef-a43d-bf78d64af13c-kube-api-access-pxl7k" (OuterVolumeSpecName: "kube-api-access-pxl7k") pod "bba3fcb5-6725-47ef-a43d-bf78d64af13c" (UID: "bba3fcb5-6725-47ef-a43d-bf78d64af13c"). InnerVolumeSpecName "kube-api-access-pxl7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.691461 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bba3fcb5-6725-47ef-a43d-bf78d64af13c" (UID: "bba3fcb5-6725-47ef-a43d-bf78d64af13c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.735238 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.735278 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba3fcb5-6725-47ef-a43d-bf78d64af13c-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.735288 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxl7k\" (UniqueName: \"kubernetes.io/projected/bba3fcb5-6725-47ef-a43d-bf78d64af13c-kube-api-access-pxl7k\") on node \"crc\" DevicePath \"\"" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.750289 4878 generic.go:334] "Generic (PLEG): container finished" podID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerID="5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7" exitCode=0 Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.750376 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ldjr4" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.750360 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldjr4" event={"ID":"bba3fcb5-6725-47ef-a43d-bf78d64af13c","Type":"ContainerDied","Data":"5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7"} Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.751151 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldjr4" event={"ID":"bba3fcb5-6725-47ef-a43d-bf78d64af13c","Type":"ContainerDied","Data":"bba06aadb27d52fb29e8ae6b89593ce01a402e90a5d22e30a34357a25eb5336e"} Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.751184 4878 scope.go:117] "RemoveContainer" containerID="5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.788024 4878 scope.go:117] "RemoveContainer" containerID="da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.797588 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ldjr4"] Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.809727 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ldjr4"] Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.818466 4878 scope.go:117] "RemoveContainer" containerID="537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.864564 4878 scope.go:117] "RemoveContainer" containerID="5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7" Dec 04 16:25:00 crc kubenswrapper[4878]: E1204 16:25:00.865381 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7\": container with ID starting with 5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7 not found: ID does not exist" containerID="5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.865445 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7"} err="failed to get container status \"5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7\": rpc error: code = NotFound desc = could not find container \"5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7\": container with ID starting with 5be201a35c692ff1a58967c64d42c7e86738111d47c67cb117ffec9ef5cab3f7 not found: ID does not exist" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.865475 4878 scope.go:117] "RemoveContainer" containerID="da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8" Dec 04 16:25:00 crc kubenswrapper[4878]: E1204 16:25:00.865890 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8\": container with ID starting with da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8 not found: ID does not exist" containerID="da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.865990 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8"} err="failed to get container status \"da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8\": rpc error: code = NotFound desc = could not find container \"da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8\": container with ID 
starting with da22fb014c31136a3628636c47786769e950ebd80f4f6f7ab202fb0ed0a2e4a8 not found: ID does not exist" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.866029 4878 scope.go:117] "RemoveContainer" containerID="537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42" Dec 04 16:25:00 crc kubenswrapper[4878]: E1204 16:25:00.867025 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42\": container with ID starting with 537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42 not found: ID does not exist" containerID="537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42" Dec 04 16:25:00 crc kubenswrapper[4878]: I1204 16:25:00.867052 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42"} err="failed to get container status \"537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42\": rpc error: code = NotFound desc = could not find container \"537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42\": container with ID starting with 537f6f83ddb49c50596e804f8fa245c918d963d0fccd57cd53e1d2f49c220e42 not found: ID does not exist" Dec 04 16:25:01 crc kubenswrapper[4878]: I1204 16:25:01.209389 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" path="/var/lib/kubelet/pods/bba3fcb5-6725-47ef-a43d-bf78d64af13c/volumes" Dec 04 16:25:28 crc kubenswrapper[4878]: E1204 16:25:28.933820 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 04 16:25:28 crc kubenswrapper[4878]: E1204 16:25:28.934961 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s5tvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(3e394916-5de1-45b1-9e49-246be63a5689): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 04 16:25:28 crc kubenswrapper[4878]: E1204 16:25:28.936328 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="3e394916-5de1-45b1-9e49-246be63a5689" Dec 04 16:25:29 crc kubenswrapper[4878]: E1204 16:25:29.035614 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="3e394916-5de1-45b1-9e49-246be63a5689" Dec 04 16:25:40 crc 
kubenswrapper[4878]: I1204 16:25:40.831314 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jx8dv"] Dec 04 16:25:40 crc kubenswrapper[4878]: E1204 16:25:40.832646 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerName="extract-content" Dec 04 16:25:40 crc kubenswrapper[4878]: I1204 16:25:40.832665 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerName="extract-content" Dec 04 16:25:40 crc kubenswrapper[4878]: E1204 16:25:40.832709 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerName="registry-server" Dec 04 16:25:40 crc kubenswrapper[4878]: I1204 16:25:40.832718 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerName="registry-server" Dec 04 16:25:40 crc kubenswrapper[4878]: E1204 16:25:40.832746 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerName="extract-utilities" Dec 04 16:25:40 crc kubenswrapper[4878]: I1204 16:25:40.832754 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerName="extract-utilities" Dec 04 16:25:40 crc kubenswrapper[4878]: I1204 16:25:40.833228 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba3fcb5-6725-47ef-a43d-bf78d64af13c" containerName="registry-server" Dec 04 16:25:40 crc kubenswrapper[4878]: I1204 16:25:40.835170 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:40 crc kubenswrapper[4878]: I1204 16:25:40.846077 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx8dv"] Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.011021 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-catalog-content\") pod \"redhat-marketplace-jx8dv\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.011387 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8rmv\" (UniqueName: \"kubernetes.io/projected/52743e4e-faf9-4427-9a99-485a1c9f33bd-kube-api-access-g8rmv\") pod \"redhat-marketplace-jx8dv\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.011424 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-utilities\") pod \"redhat-marketplace-jx8dv\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.113460 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8rmv\" (UniqueName: \"kubernetes.io/projected/52743e4e-faf9-4427-9a99-485a1c9f33bd-kube-api-access-g8rmv\") pod \"redhat-marketplace-jx8dv\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.113529 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-utilities\") pod \"redhat-marketplace-jx8dv\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.113692 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-catalog-content\") pod \"redhat-marketplace-jx8dv\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.114198 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-utilities\") pod \"redhat-marketplace-jx8dv\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.114217 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-catalog-content\") pod \"redhat-marketplace-jx8dv\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.143017 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8rmv\" (UniqueName: \"kubernetes.io/projected/52743e4e-faf9-4427-9a99-485a1c9f33bd-kube-api-access-g8rmv\") pod \"redhat-marketplace-jx8dv\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.161064 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:41 crc kubenswrapper[4878]: I1204 16:25:41.704140 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx8dv"] Dec 04 16:25:42 crc kubenswrapper[4878]: I1204 16:25:42.178472 4878 generic.go:334] "Generic (PLEG): container finished" podID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerID="1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7" exitCode=0 Dec 04 16:25:42 crc kubenswrapper[4878]: I1204 16:25:42.178603 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx8dv" event={"ID":"52743e4e-faf9-4427-9a99-485a1c9f33bd","Type":"ContainerDied","Data":"1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7"} Dec 04 16:25:42 crc kubenswrapper[4878]: I1204 16:25:42.178934 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx8dv" event={"ID":"52743e4e-faf9-4427-9a99-485a1c9f33bd","Type":"ContainerStarted","Data":"b3696601bfd23131183be71955509de346a0f24e2688b8d5cf15312fa7788df2"} Dec 04 16:25:42 crc kubenswrapper[4878]: I1204 16:25:42.644233 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 04 16:25:44 crc kubenswrapper[4878]: I1204 16:25:44.242904 4878 generic.go:334] "Generic (PLEG): container finished" podID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerID="4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288" exitCode=0 Dec 04 16:25:44 crc kubenswrapper[4878]: I1204 16:25:44.243090 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx8dv" event={"ID":"52743e4e-faf9-4427-9a99-485a1c9f33bd","Type":"ContainerDied","Data":"4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288"} Dec 04 16:25:44 crc kubenswrapper[4878]: I1204 16:25:44.247358 4878 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3e394916-5de1-45b1-9e49-246be63a5689","Type":"ContainerStarted","Data":"56eaf0eaaf96e186e63d370885ed5525bb00ebb20f40d6d909feebf6382056c1"} Dec 04 16:25:44 crc kubenswrapper[4878]: I1204 16:25:44.290193 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.7404316 podStartE2EDuration="53.290169947s" podCreationTimestamp="2025-12-04 16:24:51 +0000 UTC" firstStartedPulling="2025-12-04 16:24:53.091986215 +0000 UTC m=+2937.054523171" lastFinishedPulling="2025-12-04 16:25:42.641724562 +0000 UTC m=+2986.604261518" observedRunningTime="2025-12-04 16:25:44.288410593 +0000 UTC m=+2988.250947569" watchObservedRunningTime="2025-12-04 16:25:44.290169947 +0000 UTC m=+2988.252706903" Dec 04 16:25:45 crc kubenswrapper[4878]: I1204 16:25:45.273055 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx8dv" event={"ID":"52743e4e-faf9-4427-9a99-485a1c9f33bd","Type":"ContainerStarted","Data":"af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d"} Dec 04 16:25:45 crc kubenswrapper[4878]: I1204 16:25:45.308346 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jx8dv" podStartSLOduration=2.797903184 podStartE2EDuration="5.308324028s" podCreationTimestamp="2025-12-04 16:25:40 +0000 UTC" firstStartedPulling="2025-12-04 16:25:42.180141995 +0000 UTC m=+2986.142678951" lastFinishedPulling="2025-12-04 16:25:44.690562839 +0000 UTC m=+2988.653099795" observedRunningTime="2025-12-04 16:25:45.300448072 +0000 UTC m=+2989.262985038" watchObservedRunningTime="2025-12-04 16:25:45.308324028 +0000 UTC m=+2989.270860984" Dec 04 16:25:51 crc kubenswrapper[4878]: I1204 16:25:51.162736 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 
16:25:51 crc kubenswrapper[4878]: I1204 16:25:51.163295 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:51 crc kubenswrapper[4878]: I1204 16:25:51.228767 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:51 crc kubenswrapper[4878]: I1204 16:25:51.378710 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:51 crc kubenswrapper[4878]: I1204 16:25:51.468492 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx8dv"] Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.351159 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jx8dv" podUID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerName="registry-server" containerID="cri-o://af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d" gracePeriod=2 Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.828331 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.895344 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-catalog-content\") pod \"52743e4e-faf9-4427-9a99-485a1c9f33bd\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.895403 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-utilities\") pod \"52743e4e-faf9-4427-9a99-485a1c9f33bd\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.895670 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8rmv\" (UniqueName: \"kubernetes.io/projected/52743e4e-faf9-4427-9a99-485a1c9f33bd-kube-api-access-g8rmv\") pod \"52743e4e-faf9-4427-9a99-485a1c9f33bd\" (UID: \"52743e4e-faf9-4427-9a99-485a1c9f33bd\") " Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.896483 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-utilities" (OuterVolumeSpecName: "utilities") pod "52743e4e-faf9-4427-9a99-485a1c9f33bd" (UID: "52743e4e-faf9-4427-9a99-485a1c9f33bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.901634 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52743e4e-faf9-4427-9a99-485a1c9f33bd-kube-api-access-g8rmv" (OuterVolumeSpecName: "kube-api-access-g8rmv") pod "52743e4e-faf9-4427-9a99-485a1c9f33bd" (UID: "52743e4e-faf9-4427-9a99-485a1c9f33bd"). InnerVolumeSpecName "kube-api-access-g8rmv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.920056 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52743e4e-faf9-4427-9a99-485a1c9f33bd" (UID: "52743e4e-faf9-4427-9a99-485a1c9f33bd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.998663 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8rmv\" (UniqueName: \"kubernetes.io/projected/52743e4e-faf9-4427-9a99-485a1c9f33bd-kube-api-access-g8rmv\") on node \"crc\" DevicePath \"\"" Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.998704 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:25:53 crc kubenswrapper[4878]: I1204 16:25:53.998717 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52743e4e-faf9-4427-9a99-485a1c9f33bd-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:25:54 crc kubenswrapper[4878]: I1204 16:25:54.365198 4878 generic.go:334] "Generic (PLEG): container finished" podID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerID="af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d" exitCode=0 Dec 04 16:25:54 crc kubenswrapper[4878]: I1204 16:25:54.365257 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jx8dv" event={"ID":"52743e4e-faf9-4427-9a99-485a1c9f33bd","Type":"ContainerDied","Data":"af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d"} Dec 04 16:25:54 crc kubenswrapper[4878]: I1204 16:25:54.365300 4878 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-jx8dv" event={"ID":"52743e4e-faf9-4427-9a99-485a1c9f33bd","Type":"ContainerDied","Data":"b3696601bfd23131183be71955509de346a0f24e2688b8d5cf15312fa7788df2"} Dec 04 16:25:54 crc kubenswrapper[4878]: I1204 16:25:54.365308 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jx8dv" Dec 04 16:25:54 crc kubenswrapper[4878]: I1204 16:25:54.365324 4878 scope.go:117] "RemoveContainer" containerID="af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d" Dec 04 16:25:54 crc kubenswrapper[4878]: I1204 16:25:54.402430 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx8dv"] Dec 04 16:25:54 crc kubenswrapper[4878]: I1204 16:25:54.414375 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jx8dv"] Dec 04 16:25:55 crc kubenswrapper[4878]: I1204 16:25:55.192720 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52743e4e-faf9-4427-9a99-485a1c9f33bd" path="/var/lib/kubelet/pods/52743e4e-faf9-4427-9a99-485a1c9f33bd/volumes" Dec 04 16:25:56 crc kubenswrapper[4878]: I1204 16:25:56.200834 4878 scope.go:117] "RemoveContainer" containerID="4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288" Dec 04 16:25:56 crc kubenswrapper[4878]: I1204 16:25:56.228814 4878 scope.go:117] "RemoveContainer" containerID="1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7" Dec 04 16:25:56 crc kubenswrapper[4878]: I1204 16:25:56.276718 4878 scope.go:117] "RemoveContainer" containerID="af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d" Dec 04 16:25:56 crc kubenswrapper[4878]: E1204 16:25:56.277273 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d\": container with ID 
starting with af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d not found: ID does not exist" containerID="af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d"
Dec 04 16:25:56 crc kubenswrapper[4878]: I1204 16:25:56.277319 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d"} err="failed to get container status \"af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d\": rpc error: code = NotFound desc = could not find container \"af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d\": container with ID starting with af17e05200ac9800c37ecbdfe7a14b9dca73bcd61cd32a609a592d6e8a1c7b5d not found: ID does not exist"
Dec 04 16:25:56 crc kubenswrapper[4878]: I1204 16:25:56.277347 4878 scope.go:117] "RemoveContainer" containerID="4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288"
Dec 04 16:25:56 crc kubenswrapper[4878]: E1204 16:25:56.277702 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288\": container with ID starting with 4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288 not found: ID does not exist" containerID="4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288"
Dec 04 16:25:56 crc kubenswrapper[4878]: I1204 16:25:56.277742 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288"} err="failed to get container status \"4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288\": rpc error: code = NotFound desc = could not find container \"4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288\": container with ID starting with 4a3ddd4bc0d58033955f71159727fc5e4b09e2cd790f93f9c8bfd5f184c91288 not found: ID does not exist"
Dec 04 16:25:56 crc kubenswrapper[4878]: I1204 16:25:56.277768 4878 scope.go:117] "RemoveContainer" containerID="1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7"
Dec 04 16:25:56 crc kubenswrapper[4878]: E1204 16:25:56.278109 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7\": container with ID starting with 1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7 not found: ID does not exist" containerID="1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7"
Dec 04 16:25:56 crc kubenswrapper[4878]: I1204 16:25:56.278133 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7"} err="failed to get container status \"1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7\": rpc error: code = NotFound desc = could not find container \"1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7\": container with ID starting with 1f1850b2d067f1a160bba78cfb78f8a397ce54474db63c1e912f58a1fc527ce7 not found: ID does not exist"
Dec 04 16:27:00 crc kubenswrapper[4878]: I1204 16:27:00.840424 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 16:27:00 crc kubenswrapper[4878]: I1204 16:27:00.840992 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 16:27:30 crc kubenswrapper[4878]: I1204 16:27:30.840727 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 16:27:30 crc kubenswrapper[4878]: I1204 16:27:30.841323 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 16:28:00 crc kubenswrapper[4878]: I1204 16:28:00.840241 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 16:28:00 crc kubenswrapper[4878]: I1204 16:28:00.840806 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 16:28:00 crc kubenswrapper[4878]: I1204 16:28:00.840866 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw"
Dec 04 16:28:00 crc kubenswrapper[4878]: I1204 16:28:00.841787 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 04 16:28:00 crc kubenswrapper[4878]: I1204 16:28:00.841842 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82" gracePeriod=600
Dec 04 16:28:00 crc kubenswrapper[4878]: E1204 16:28:00.962122 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:28:01 crc kubenswrapper[4878]: I1204 16:28:01.630254 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82" exitCode=0
Dec 04 16:28:01 crc kubenswrapper[4878]: I1204 16:28:01.630307 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"}
Dec 04 16:28:01 crc kubenswrapper[4878]: I1204 16:28:01.630366 4878 scope.go:117] "RemoveContainer" containerID="a810ee06498f47a45c18518936b5a79f3af3fd2817743b65ecfd8a269411bb44"
Dec 04 16:28:01 crc kubenswrapper[4878]: I1204 16:28:01.631396 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:28:01 crc kubenswrapper[4878]: E1204 16:28:01.631832 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:28:14 crc kubenswrapper[4878]: I1204 16:28:14.180985 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:28:14 crc kubenswrapper[4878]: E1204 16:28:14.181626 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:28:28 crc kubenswrapper[4878]: I1204 16:28:28.180328 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:28:28 crc kubenswrapper[4878]: E1204 16:28:28.181024 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:28:39 crc kubenswrapper[4878]: I1204 16:28:39.181263 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:28:39 crc kubenswrapper[4878]: E1204 16:28:39.182055 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:28:53 crc kubenswrapper[4878]: I1204 16:28:53.180624 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:28:53 crc kubenswrapper[4878]: E1204 16:28:53.181806 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:29:07 crc kubenswrapper[4878]: I1204 16:29:07.190260 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:29:07 crc kubenswrapper[4878]: E1204 16:29:07.191726 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:29:21 crc kubenswrapper[4878]: I1204 16:29:21.179722 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:29:21 crc kubenswrapper[4878]: E1204 16:29:21.180606 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:29:33 crc kubenswrapper[4878]: I1204 16:29:33.179777 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:29:33 crc kubenswrapper[4878]: E1204 16:29:33.180701 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:29:46 crc kubenswrapper[4878]: I1204 16:29:46.179672 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:29:46 crc kubenswrapper[4878]: E1204 16:29:46.180782 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.146026 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"]
Dec 04 16:30:00 crc kubenswrapper[4878]: E1204 16:30:00.147038 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerName="extract-utilities"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.147055 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerName="extract-utilities"
Dec 04 16:30:00 crc kubenswrapper[4878]: E1204 16:30:00.147067 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerName="extract-content"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.147077 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerName="extract-content"
Dec 04 16:30:00 crc kubenswrapper[4878]: E1204 16:30:00.147117 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerName="registry-server"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.147126 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerName="registry-server"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.147345 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="52743e4e-faf9-4427-9a99-485a1c9f33bd" containerName="registry-server"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.148201 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.154198 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.154471 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.167171 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"]
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.184099 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4967d23c-3a62-4dcb-af41-48bb1c278e76-secret-volume\") pod \"collect-profiles-29414430-kttjp\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.184179 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtftd\" (UniqueName: \"kubernetes.io/projected/4967d23c-3a62-4dcb-af41-48bb1c278e76-kube-api-access-jtftd\") pod \"collect-profiles-29414430-kttjp\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.184209 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4967d23c-3a62-4dcb-af41-48bb1c278e76-config-volume\") pod \"collect-profiles-29414430-kttjp\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.286366 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4967d23c-3a62-4dcb-af41-48bb1c278e76-secret-volume\") pod \"collect-profiles-29414430-kttjp\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.286450 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtftd\" (UniqueName: \"kubernetes.io/projected/4967d23c-3a62-4dcb-af41-48bb1c278e76-kube-api-access-jtftd\") pod \"collect-profiles-29414430-kttjp\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.286496 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4967d23c-3a62-4dcb-af41-48bb1c278e76-config-volume\") pod \"collect-profiles-29414430-kttjp\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.287731 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4967d23c-3a62-4dcb-af41-48bb1c278e76-config-volume\") pod \"collect-profiles-29414430-kttjp\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.301976 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4967d23c-3a62-4dcb-af41-48bb1c278e76-secret-volume\") pod \"collect-profiles-29414430-kttjp\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.311541 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtftd\" (UniqueName: \"kubernetes.io/projected/4967d23c-3a62-4dcb-af41-48bb1c278e76-kube-api-access-jtftd\") pod \"collect-profiles-29414430-kttjp\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.486214 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:00 crc kubenswrapper[4878]: I1204 16:30:00.989387 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"]
Dec 04 16:30:01 crc kubenswrapper[4878]: I1204 16:30:01.180492 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:30:01 crc kubenswrapper[4878]: E1204 16:30:01.180972 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:30:01 crc kubenswrapper[4878]: I1204 16:30:01.798508 4878 generic.go:334] "Generic (PLEG): container finished" podID="4967d23c-3a62-4dcb-af41-48bb1c278e76" containerID="e97b5faf015c06f10aa000ccbcedfdf9563dd6bb7554686b3261a7eb51722369" exitCode=0
Dec 04 16:30:01 crc kubenswrapper[4878]: I1204 16:30:01.798616 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp" event={"ID":"4967d23c-3a62-4dcb-af41-48bb1c278e76","Type":"ContainerDied","Data":"e97b5faf015c06f10aa000ccbcedfdf9563dd6bb7554686b3261a7eb51722369"}
Dec 04 16:30:01 crc kubenswrapper[4878]: I1204 16:30:01.799284 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp" event={"ID":"4967d23c-3a62-4dcb-af41-48bb1c278e76","Type":"ContainerStarted","Data":"cd77564a0d706ecdc1ae5ce96f5ff6641dbcb15bf1db14ea7f2fc8d390d6ebe8"}
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.200190 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.264357 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtftd\" (UniqueName: \"kubernetes.io/projected/4967d23c-3a62-4dcb-af41-48bb1c278e76-kube-api-access-jtftd\") pod \"4967d23c-3a62-4dcb-af41-48bb1c278e76\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") "
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.264601 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4967d23c-3a62-4dcb-af41-48bb1c278e76-secret-volume\") pod \"4967d23c-3a62-4dcb-af41-48bb1c278e76\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") "
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.264632 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4967d23c-3a62-4dcb-af41-48bb1c278e76-config-volume\") pod \"4967d23c-3a62-4dcb-af41-48bb1c278e76\" (UID: \"4967d23c-3a62-4dcb-af41-48bb1c278e76\") "
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.265854 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4967d23c-3a62-4dcb-af41-48bb1c278e76-config-volume" (OuterVolumeSpecName: "config-volume") pod "4967d23c-3a62-4dcb-af41-48bb1c278e76" (UID: "4967d23c-3a62-4dcb-af41-48bb1c278e76"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.271428 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4967d23c-3a62-4dcb-af41-48bb1c278e76-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4967d23c-3a62-4dcb-af41-48bb1c278e76" (UID: "4967d23c-3a62-4dcb-af41-48bb1c278e76"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.273299 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4967d23c-3a62-4dcb-af41-48bb1c278e76-kube-api-access-jtftd" (OuterVolumeSpecName: "kube-api-access-jtftd") pod "4967d23c-3a62-4dcb-af41-48bb1c278e76" (UID: "4967d23c-3a62-4dcb-af41-48bb1c278e76"). InnerVolumeSpecName "kube-api-access-jtftd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.367043 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtftd\" (UniqueName: \"kubernetes.io/projected/4967d23c-3a62-4dcb-af41-48bb1c278e76-kube-api-access-jtftd\") on node \"crc\" DevicePath \"\""
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.367090 4878 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4967d23c-3a62-4dcb-af41-48bb1c278e76-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.367106 4878 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4967d23c-3a62-4dcb-af41-48bb1c278e76-config-volume\") on node \"crc\" DevicePath \"\""
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.816064 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp" event={"ID":"4967d23c-3a62-4dcb-af41-48bb1c278e76","Type":"ContainerDied","Data":"cd77564a0d706ecdc1ae5ce96f5ff6641dbcb15bf1db14ea7f2fc8d390d6ebe8"}
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.816397 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd77564a0d706ecdc1ae5ce96f5ff6641dbcb15bf1db14ea7f2fc8d390d6ebe8"
Dec 04 16:30:03 crc kubenswrapper[4878]: I1204 16:30:03.816107 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414430-kttjp"
Dec 04 16:30:04 crc kubenswrapper[4878]: I1204 16:30:04.299723 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9"]
Dec 04 16:30:04 crc kubenswrapper[4878]: I1204 16:30:04.310443 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414385-rvfz9"]
Dec 04 16:30:05 crc kubenswrapper[4878]: I1204 16:30:05.191821 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2145ebc-f247-4f3a-9843-ad32f23fa61c" path="/var/lib/kubelet/pods/d2145ebc-f247-4f3a-9843-ad32f23fa61c/volumes"
Dec 04 16:30:15 crc kubenswrapper[4878]: I1204 16:30:15.180234 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:30:15 crc kubenswrapper[4878]: E1204 16:30:15.180940 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:30:26 crc kubenswrapper[4878]: I1204 16:30:26.179820 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:30:26 crc kubenswrapper[4878]: E1204 16:30:26.180521 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:30:28 crc kubenswrapper[4878]: I1204 16:30:28.997764 4878 scope.go:117] "RemoveContainer" containerID="929051d28420808246af3dc8bd1173fd8ad9da7a4f7b6c6d743021cfe0c0c025"
Dec 04 16:30:39 crc kubenswrapper[4878]: I1204 16:30:39.181073 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:30:39 crc kubenswrapper[4878]: E1204 16:30:39.181961 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:30:54 crc kubenswrapper[4878]: I1204 16:30:54.180617 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:30:54 crc kubenswrapper[4878]: E1204 16:30:54.181537 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:31:05 crc kubenswrapper[4878]: I1204 16:31:05.179670 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:31:05 crc kubenswrapper[4878]: E1204 16:31:05.180525 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:31:17 crc kubenswrapper[4878]: I1204 16:31:17.187712 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:31:17 crc kubenswrapper[4878]: E1204 16:31:17.189489 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:31:29 crc kubenswrapper[4878]: I1204 16:31:29.180141 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:31:29 crc kubenswrapper[4878]: E1204 16:31:29.181001 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:31:40 crc kubenswrapper[4878]: I1204 16:31:40.179923 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:31:40 crc kubenswrapper[4878]: E1204 16:31:40.180591 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:31:52 crc kubenswrapper[4878]: I1204 16:31:52.179898 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:31:52 crc kubenswrapper[4878]: E1204 16:31:52.180771 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:32:06 crc kubenswrapper[4878]: I1204 16:32:06.180489 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:32:06 crc kubenswrapper[4878]: E1204 16:32:06.181382 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:32:19 crc kubenswrapper[4878]: I1204 16:32:19.183625 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:32:19 crc kubenswrapper[4878]: E1204 16:32:19.184473 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:32:34 crc kubenswrapper[4878]: I1204 16:32:34.180373 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:32:34 crc kubenswrapper[4878]: E1204 16:32:34.181084 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:32:48 crc kubenswrapper[4878]: I1204 16:32:48.181131 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:32:48 crc kubenswrapper[4878]: E1204 16:32:48.183299 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:33:02 crc kubenswrapper[4878]: I1204 16:33:02.183461 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82"
Dec 04 16:33:02 crc kubenswrapper[4878]: I1204 16:33:02.736854 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"dfa024daa6e4958b14614fe7b8387fc929c32efa6d7f00d6b5ae10c0b9cc827e"} Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.354434 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tfr6n"] Dec 04 16:34:49 crc kubenswrapper[4878]: E1204 16:34:49.355426 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4967d23c-3a62-4dcb-af41-48bb1c278e76" containerName="collect-profiles" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.355440 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4967d23c-3a62-4dcb-af41-48bb1c278e76" containerName="collect-profiles" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.355668 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="4967d23c-3a62-4dcb-af41-48bb1c278e76" containerName="collect-profiles" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.357096 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.373198 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfr6n"] Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.448792 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfpx8\" (UniqueName: \"kubernetes.io/projected/4b627001-9dba-4526-a1b7-6dca27fecc67-kube-api-access-wfpx8\") pod \"redhat-operators-tfr6n\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.449065 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-catalog-content\") pod \"redhat-operators-tfr6n\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.449323 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-utilities\") pod \"redhat-operators-tfr6n\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.552164 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfpx8\" (UniqueName: \"kubernetes.io/projected/4b627001-9dba-4526-a1b7-6dca27fecc67-kube-api-access-wfpx8\") pod \"redhat-operators-tfr6n\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.552319 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-catalog-content\") pod \"redhat-operators-tfr6n\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.552388 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-utilities\") pod \"redhat-operators-tfr6n\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.552820 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-catalog-content\") pod \"redhat-operators-tfr6n\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.552885 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-utilities\") pod \"redhat-operators-tfr6n\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.574531 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfpx8\" (UniqueName: \"kubernetes.io/projected/4b627001-9dba-4526-a1b7-6dca27fecc67-kube-api-access-wfpx8\") pod \"redhat-operators-tfr6n\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:49 crc kubenswrapper[4878]: I1204 16:34:49.686675 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:34:50 crc kubenswrapper[4878]: I1204 16:34:50.242767 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tfr6n"] Dec 04 16:34:50 crc kubenswrapper[4878]: I1204 16:34:50.826656 4878 generic.go:334] "Generic (PLEG): container finished" podID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerID="372cdc889c7dc1b40d073ffd50760ee44bcaddedf25e25648f115432bd2bc6b0" exitCode=0 Dec 04 16:34:50 crc kubenswrapper[4878]: I1204 16:34:50.827192 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfr6n" event={"ID":"4b627001-9dba-4526-a1b7-6dca27fecc67","Type":"ContainerDied","Data":"372cdc889c7dc1b40d073ffd50760ee44bcaddedf25e25648f115432bd2bc6b0"} Dec 04 16:34:50 crc kubenswrapper[4878]: I1204 16:34:50.827290 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfr6n" event={"ID":"4b627001-9dba-4526-a1b7-6dca27fecc67","Type":"ContainerStarted","Data":"7011c5b404b3b76f4d4158fdb3fb20291e1ef453fb138528f6cae5f41baf07ef"} Dec 04 16:34:50 crc kubenswrapper[4878]: I1204 16:34:50.829724 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 16:34:51 crc kubenswrapper[4878]: I1204 16:34:51.839084 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfr6n" event={"ID":"4b627001-9dba-4526-a1b7-6dca27fecc67","Type":"ContainerStarted","Data":"45e5feb05fc9b3fa1c05e5d35871abacf046d7a0c232341b8825be7c9f7c6940"} Dec 04 16:34:58 crc kubenswrapper[4878]: I1204 16:34:58.913655 4878 generic.go:334] "Generic (PLEG): container finished" podID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerID="45e5feb05fc9b3fa1c05e5d35871abacf046d7a0c232341b8825be7c9f7c6940" exitCode=0 Dec 04 16:34:58 crc kubenswrapper[4878]: I1204 16:34:58.913741 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-tfr6n" event={"ID":"4b627001-9dba-4526-a1b7-6dca27fecc67","Type":"ContainerDied","Data":"45e5feb05fc9b3fa1c05e5d35871abacf046d7a0c232341b8825be7c9f7c6940"} Dec 04 16:35:00 crc kubenswrapper[4878]: I1204 16:35:00.949971 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfr6n" event={"ID":"4b627001-9dba-4526-a1b7-6dca27fecc67","Type":"ContainerStarted","Data":"e3daf069bab4f348c25c08889fe54a5cdaf23f4e210ffb0ab9f02b28edee47cc"} Dec 04 16:35:00 crc kubenswrapper[4878]: I1204 16:35:00.977000 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tfr6n" podStartSLOduration=3.488899624 podStartE2EDuration="11.97697443s" podCreationTimestamp="2025-12-04 16:34:49 +0000 UTC" firstStartedPulling="2025-12-04 16:34:50.829348613 +0000 UTC m=+3534.791885569" lastFinishedPulling="2025-12-04 16:34:59.317423419 +0000 UTC m=+3543.279960375" observedRunningTime="2025-12-04 16:35:00.974072287 +0000 UTC m=+3544.936609243" watchObservedRunningTime="2025-12-04 16:35:00.97697443 +0000 UTC m=+3544.939511386" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.591148 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vkkbd"] Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.599596 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.614310 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vkkbd"] Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.715824 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-utilities\") pod \"community-operators-vkkbd\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.716236 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-catalog-content\") pod \"community-operators-vkkbd\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.716515 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdklh\" (UniqueName: \"kubernetes.io/projected/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-kube-api-access-rdklh\") pod \"community-operators-vkkbd\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.818629 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdklh\" (UniqueName: \"kubernetes.io/projected/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-kube-api-access-rdklh\") pod \"community-operators-vkkbd\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.818804 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-utilities\") pod \"community-operators-vkkbd\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.818913 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-catalog-content\") pod \"community-operators-vkkbd\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.819448 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-utilities\") pod \"community-operators-vkkbd\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.819485 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-catalog-content\") pod \"community-operators-vkkbd\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.844657 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdklh\" (UniqueName: \"kubernetes.io/projected/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-kube-api-access-rdklh\") pod \"community-operators-vkkbd\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:03 crc kubenswrapper[4878]: I1204 16:35:03.920281 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:04 crc kubenswrapper[4878]: W1204 16:35:04.578647 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d4c99a_6699_4b8c_ab50_38ab5e4f9c73.slice/crio-9f6e9328d9ecd1ce1bcaa55ab5ecbcf7c6e36439574ff6ddcca86efc8e628ef9 WatchSource:0}: Error finding container 9f6e9328d9ecd1ce1bcaa55ab5ecbcf7c6e36439574ff6ddcca86efc8e628ef9: Status 404 returned error can't find the container with id 9f6e9328d9ecd1ce1bcaa55ab5ecbcf7c6e36439574ff6ddcca86efc8e628ef9 Dec 04 16:35:04 crc kubenswrapper[4878]: I1204 16:35:04.584526 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vkkbd"] Dec 04 16:35:04 crc kubenswrapper[4878]: I1204 16:35:04.993632 4878 generic.go:334] "Generic (PLEG): container finished" podID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerID="3e048ebe8d08f801f857abbd2aa8affc692af1592ae35cf65ac5433db59bbbcf" exitCode=0 Dec 04 16:35:04 crc kubenswrapper[4878]: I1204 16:35:04.993686 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkkbd" event={"ID":"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73","Type":"ContainerDied","Data":"3e048ebe8d08f801f857abbd2aa8affc692af1592ae35cf65ac5433db59bbbcf"} Dec 04 16:35:04 crc kubenswrapper[4878]: I1204 16:35:04.993721 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkkbd" event={"ID":"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73","Type":"ContainerStarted","Data":"9f6e9328d9ecd1ce1bcaa55ab5ecbcf7c6e36439574ff6ddcca86efc8e628ef9"} Dec 04 16:35:06 crc kubenswrapper[4878]: I1204 16:35:06.005415 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkkbd" 
event={"ID":"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73","Type":"ContainerStarted","Data":"dce28c7c979c886e4abf7a76556ca8c5e0983070add37d1a8a034b4ebb3ff991"} Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.016370 4878 generic.go:334] "Generic (PLEG): container finished" podID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerID="dce28c7c979c886e4abf7a76556ca8c5e0983070add37d1a8a034b4ebb3ff991" exitCode=0 Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.016472 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkkbd" event={"ID":"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73","Type":"ContainerDied","Data":"dce28c7c979c886e4abf7a76556ca8c5e0983070add37d1a8a034b4ebb3ff991"} Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.385074 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vjqv"] Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.387345 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.396382 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vjqv"] Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.415536 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw8f4\" (UniqueName: \"kubernetes.io/projected/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-kube-api-access-gw8f4\") pod \"certified-operators-8vjqv\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.415625 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-catalog-content\") pod \"certified-operators-8vjqv\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.415657 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-utilities\") pod \"certified-operators-8vjqv\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.518089 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw8f4\" (UniqueName: \"kubernetes.io/projected/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-kube-api-access-gw8f4\") pod \"certified-operators-8vjqv\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.518150 4878 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-catalog-content\") pod \"certified-operators-8vjqv\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.518178 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-utilities\") pod \"certified-operators-8vjqv\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.518755 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-utilities\") pod \"certified-operators-8vjqv\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.519431 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-catalog-content\") pod \"certified-operators-8vjqv\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.548750 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw8f4\" (UniqueName: \"kubernetes.io/projected/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-kube-api-access-gw8f4\") pod \"certified-operators-8vjqv\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:07 crc kubenswrapper[4878]: I1204 16:35:07.717457 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:08 crc kubenswrapper[4878]: I1204 16:35:08.041242 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkkbd" event={"ID":"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73","Type":"ContainerStarted","Data":"0e43541119f9a5f60603f6305fb2415ffc0d0dd2f09f70b2666a8ab0d2f8d72a"} Dec 04 16:35:08 crc kubenswrapper[4878]: I1204 16:35:08.302840 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vkkbd" podStartSLOduration=2.632155349 podStartE2EDuration="5.302804306s" podCreationTimestamp="2025-12-04 16:35:03 +0000 UTC" firstStartedPulling="2025-12-04 16:35:04.996036781 +0000 UTC m=+3548.958573737" lastFinishedPulling="2025-12-04 16:35:07.666685738 +0000 UTC m=+3551.629222694" observedRunningTime="2025-12-04 16:35:08.077574358 +0000 UTC m=+3552.040111314" watchObservedRunningTime="2025-12-04 16:35:08.302804306 +0000 UTC m=+3552.265341272" Dec 04 16:35:08 crc kubenswrapper[4878]: I1204 16:35:08.305472 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vjqv"] Dec 04 16:35:09 crc kubenswrapper[4878]: I1204 16:35:09.053500 4878 generic.go:334] "Generic (PLEG): container finished" podID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerID="e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c" exitCode=0 Dec 04 16:35:09 crc kubenswrapper[4878]: I1204 16:35:09.053615 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjqv" event={"ID":"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9","Type":"ContainerDied","Data":"e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c"} Dec 04 16:35:09 crc kubenswrapper[4878]: I1204 16:35:09.054650 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjqv" 
event={"ID":"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9","Type":"ContainerStarted","Data":"5a743fdf7ab3dae66a86e4c6ce4038d1cdf263e8badb1a016ed129212060eafe"} Dec 04 16:35:09 crc kubenswrapper[4878]: I1204 16:35:09.687778 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:35:09 crc kubenswrapper[4878]: I1204 16:35:09.688129 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:35:10 crc kubenswrapper[4878]: I1204 16:35:10.072269 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjqv" event={"ID":"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9","Type":"ContainerStarted","Data":"dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653"} Dec 04 16:35:10 crc kubenswrapper[4878]: I1204 16:35:10.773429 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tfr6n" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerName="registry-server" probeResult="failure" output=< Dec 04 16:35:10 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 16:35:10 crc kubenswrapper[4878]: > Dec 04 16:35:12 crc kubenswrapper[4878]: I1204 16:35:12.093338 4878 generic.go:334] "Generic (PLEG): container finished" podID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerID="dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653" exitCode=0 Dec 04 16:35:12 crc kubenswrapper[4878]: I1204 16:35:12.093392 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjqv" event={"ID":"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9","Type":"ContainerDied","Data":"dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653"} Dec 04 16:35:13 crc kubenswrapper[4878]: I1204 16:35:13.922268 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:13 crc kubenswrapper[4878]: I1204 16:35:13.922596 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:14 crc kubenswrapper[4878]: I1204 16:35:14.006560 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:14 crc kubenswrapper[4878]: I1204 16:35:14.197076 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:15 crc kubenswrapper[4878]: I1204 16:35:15.133140 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjqv" event={"ID":"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9","Type":"ContainerStarted","Data":"9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165"} Dec 04 16:35:15 crc kubenswrapper[4878]: I1204 16:35:15.153668 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vjqv" podStartSLOduration=2.554822162 podStartE2EDuration="8.153642957s" podCreationTimestamp="2025-12-04 16:35:07 +0000 UTC" firstStartedPulling="2025-12-04 16:35:09.05555635 +0000 UTC m=+3553.018093306" lastFinishedPulling="2025-12-04 16:35:14.654377145 +0000 UTC m=+3558.616914101" observedRunningTime="2025-12-04 16:35:15.151115834 +0000 UTC m=+3559.113652790" watchObservedRunningTime="2025-12-04 16:35:15.153642957 +0000 UTC m=+3559.116179913" Dec 04 16:35:15 crc kubenswrapper[4878]: I1204 16:35:15.567422 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vkkbd"] Dec 04 16:35:16 crc kubenswrapper[4878]: I1204 16:35:16.141764 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vkkbd" podUID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" 
containerName="registry-server" containerID="cri-o://0e43541119f9a5f60603f6305fb2415ffc0d0dd2f09f70b2666a8ab0d2f8d72a" gracePeriod=2 Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.159915 4878 generic.go:334] "Generic (PLEG): container finished" podID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerID="0e43541119f9a5f60603f6305fb2415ffc0d0dd2f09f70b2666a8ab0d2f8d72a" exitCode=0 Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.160198 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkkbd" event={"ID":"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73","Type":"ContainerDied","Data":"0e43541119f9a5f60603f6305fb2415ffc0d0dd2f09f70b2666a8ab0d2f8d72a"} Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.429439 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.553600 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-catalog-content\") pod \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.553694 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdklh\" (UniqueName: \"kubernetes.io/projected/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-kube-api-access-rdklh\") pod \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.553857 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-utilities\") pod \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\" (UID: \"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73\") " Dec 04 16:35:17 crc 
kubenswrapper[4878]: I1204 16:35:17.554761 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-utilities" (OuterVolumeSpecName: "utilities") pod "e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" (UID: "e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.561261 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-kube-api-access-rdklh" (OuterVolumeSpecName: "kube-api-access-rdklh") pod "e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" (UID: "e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73"). InnerVolumeSpecName "kube-api-access-rdklh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.607238 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" (UID: "e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.656202 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.656242 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdklh\" (UniqueName: \"kubernetes.io/projected/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-kube-api-access-rdklh\") on node \"crc\" DevicePath \"\"" Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.656255 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.717910 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.717982 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:17 crc kubenswrapper[4878]: I1204 16:35:17.778737 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:18 crc kubenswrapper[4878]: I1204 16:35:18.174779 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vkkbd" event={"ID":"e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73","Type":"ContainerDied","Data":"9f6e9328d9ecd1ce1bcaa55ab5ecbcf7c6e36439574ff6ddcca86efc8e628ef9"} Dec 04 16:35:18 crc kubenswrapper[4878]: I1204 16:35:18.174853 4878 scope.go:117] "RemoveContainer" containerID="0e43541119f9a5f60603f6305fb2415ffc0d0dd2f09f70b2666a8ab0d2f8d72a" Dec 04 16:35:18 crc kubenswrapper[4878]: I1204 16:35:18.174861 
4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vkkbd" Dec 04 16:35:18 crc kubenswrapper[4878]: I1204 16:35:18.199035 4878 scope.go:117] "RemoveContainer" containerID="dce28c7c979c886e4abf7a76556ca8c5e0983070add37d1a8a034b4ebb3ff991" Dec 04 16:35:18 crc kubenswrapper[4878]: I1204 16:35:18.214130 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vkkbd"] Dec 04 16:35:18 crc kubenswrapper[4878]: I1204 16:35:18.226201 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vkkbd"] Dec 04 16:35:18 crc kubenswrapper[4878]: I1204 16:35:18.241927 4878 scope.go:117] "RemoveContainer" containerID="3e048ebe8d08f801f857abbd2aa8affc692af1592ae35cf65ac5433db59bbbcf" Dec 04 16:35:19 crc kubenswrapper[4878]: I1204 16:35:19.192154 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" path="/var/lib/kubelet/pods/e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73/volumes" Dec 04 16:35:19 crc kubenswrapper[4878]: I1204 16:35:19.744418 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:35:19 crc kubenswrapper[4878]: I1204 16:35:19.793713 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:35:21 crc kubenswrapper[4878]: I1204 16:35:21.767579 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfr6n"] Dec 04 16:35:21 crc kubenswrapper[4878]: I1204 16:35:21.768707 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tfr6n" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerName="registry-server" containerID="cri-o://e3daf069bab4f348c25c08889fe54a5cdaf23f4e210ffb0ab9f02b28edee47cc" 
gracePeriod=2 Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.218638 4878 generic.go:334] "Generic (PLEG): container finished" podID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerID="e3daf069bab4f348c25c08889fe54a5cdaf23f4e210ffb0ab9f02b28edee47cc" exitCode=0 Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.218692 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfr6n" event={"ID":"4b627001-9dba-4526-a1b7-6dca27fecc67","Type":"ContainerDied","Data":"e3daf069bab4f348c25c08889fe54a5cdaf23f4e210ffb0ab9f02b28edee47cc"} Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.219051 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tfr6n" event={"ID":"4b627001-9dba-4526-a1b7-6dca27fecc67","Type":"ContainerDied","Data":"7011c5b404b3b76f4d4158fdb3fb20291e1ef453fb138528f6cae5f41baf07ef"} Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.219087 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7011c5b404b3b76f4d4158fdb3fb20291e1ef453fb138528f6cae5f41baf07ef" Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.261886 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.367052 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-utilities\") pod \"4b627001-9dba-4526-a1b7-6dca27fecc67\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.367612 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-catalog-content\") pod \"4b627001-9dba-4526-a1b7-6dca27fecc67\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.367754 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfpx8\" (UniqueName: \"kubernetes.io/projected/4b627001-9dba-4526-a1b7-6dca27fecc67-kube-api-access-wfpx8\") pod \"4b627001-9dba-4526-a1b7-6dca27fecc67\" (UID: \"4b627001-9dba-4526-a1b7-6dca27fecc67\") " Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.367945 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-utilities" (OuterVolumeSpecName: "utilities") pod "4b627001-9dba-4526-a1b7-6dca27fecc67" (UID: "4b627001-9dba-4526-a1b7-6dca27fecc67"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.370371 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.373445 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b627001-9dba-4526-a1b7-6dca27fecc67-kube-api-access-wfpx8" (OuterVolumeSpecName: "kube-api-access-wfpx8") pod "4b627001-9dba-4526-a1b7-6dca27fecc67" (UID: "4b627001-9dba-4526-a1b7-6dca27fecc67"). InnerVolumeSpecName "kube-api-access-wfpx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.459693 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b627001-9dba-4526-a1b7-6dca27fecc67" (UID: "4b627001-9dba-4526-a1b7-6dca27fecc67"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.472339 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b627001-9dba-4526-a1b7-6dca27fecc67-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:35:22 crc kubenswrapper[4878]: I1204 16:35:22.472396 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfpx8\" (UniqueName: \"kubernetes.io/projected/4b627001-9dba-4526-a1b7-6dca27fecc67-kube-api-access-wfpx8\") on node \"crc\" DevicePath \"\"" Dec 04 16:35:23 crc kubenswrapper[4878]: I1204 16:35:23.269335 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tfr6n" Dec 04 16:35:23 crc kubenswrapper[4878]: I1204 16:35:23.301682 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tfr6n"] Dec 04 16:35:23 crc kubenswrapper[4878]: I1204 16:35:23.313509 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tfr6n"] Dec 04 16:35:25 crc kubenswrapper[4878]: I1204 16:35:25.191006 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" path="/var/lib/kubelet/pods/4b627001-9dba-4526-a1b7-6dca27fecc67/volumes" Dec 04 16:35:27 crc kubenswrapper[4878]: I1204 16:35:27.770162 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:27 crc kubenswrapper[4878]: I1204 16:35:27.823846 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vjqv"] Dec 04 16:35:28 crc kubenswrapper[4878]: I1204 16:35:28.315103 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8vjqv" podUID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerName="registry-server" containerID="cri-o://9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165" gracePeriod=2 Dec 04 16:35:28 crc kubenswrapper[4878]: I1204 16:35:28.818246 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:28 crc kubenswrapper[4878]: I1204 16:35:28.903957 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-catalog-content\") pod \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " Dec 04 16:35:28 crc kubenswrapper[4878]: I1204 16:35:28.904157 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw8f4\" (UniqueName: \"kubernetes.io/projected/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-kube-api-access-gw8f4\") pod \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " Dec 04 16:35:28 crc kubenswrapper[4878]: I1204 16:35:28.904297 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-utilities\") pod \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\" (UID: \"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9\") " Dec 04 16:35:28 crc kubenswrapper[4878]: I1204 16:35:28.905099 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-utilities" (OuterVolumeSpecName: "utilities") pod "58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" (UID: "58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:35:28 crc kubenswrapper[4878]: I1204 16:35:28.909932 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-kube-api-access-gw8f4" (OuterVolumeSpecName: "kube-api-access-gw8f4") pod "58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" (UID: "58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9"). InnerVolumeSpecName "kube-api-access-gw8f4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:35:28 crc kubenswrapper[4878]: I1204 16:35:28.957937 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" (UID: "58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.006627 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.006664 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw8f4\" (UniqueName: \"kubernetes.io/projected/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-kube-api-access-gw8f4\") on node \"crc\" DevicePath \"\"" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.006677 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.324542 4878 generic.go:334] "Generic (PLEG): container finished" podID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerID="9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165" exitCode=0 Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.324586 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vjqv" event={"ID":"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9","Type":"ContainerDied","Data":"9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165"} Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.324616 4878 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-8vjqv" event={"ID":"58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9","Type":"ContainerDied","Data":"5a743fdf7ab3dae66a86e4c6ce4038d1cdf263e8badb1a016ed129212060eafe"} Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.324634 4878 scope.go:117] "RemoveContainer" containerID="9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.324801 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vjqv" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.350818 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vjqv"] Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.353452 4878 scope.go:117] "RemoveContainer" containerID="dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.363018 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8vjqv"] Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.376086 4878 scope.go:117] "RemoveContainer" containerID="e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.417022 4878 scope.go:117] "RemoveContainer" containerID="9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165" Dec 04 16:35:29 crc kubenswrapper[4878]: E1204 16:35:29.417412 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165\": container with ID starting with 9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165 not found: ID does not exist" containerID="9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 
16:35:29.417522 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165"} err="failed to get container status \"9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165\": rpc error: code = NotFound desc = could not find container \"9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165\": container with ID starting with 9252fb804f07b3392e05647bcb18ef074b9e7cf53b96a1bb8cc218eb183cc165 not found: ID does not exist" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.417607 4878 scope.go:117] "RemoveContainer" containerID="dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653" Dec 04 16:35:29 crc kubenswrapper[4878]: E1204 16:35:29.418016 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653\": container with ID starting with dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653 not found: ID does not exist" containerID="dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.418128 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653"} err="failed to get container status \"dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653\": rpc error: code = NotFound desc = could not find container \"dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653\": container with ID starting with dfd57004b68c3f9a5d022f3a68e37f2b061e095d030a31fd892db29c1cd60653 not found: ID does not exist" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.418217 4878 scope.go:117] "RemoveContainer" containerID="e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c" Dec 04 16:35:29 crc 
kubenswrapper[4878]: E1204 16:35:29.418526 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c\": container with ID starting with e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c not found: ID does not exist" containerID="e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c" Dec 04 16:35:29 crc kubenswrapper[4878]: I1204 16:35:29.418630 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c"} err="failed to get container status \"e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c\": rpc error: code = NotFound desc = could not find container \"e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c\": container with ID starting with e3f1c92d8ae006d5a04c588641831d219df5588228dbe380cf48ae81bb7e806c not found: ID does not exist" Dec 04 16:35:30 crc kubenswrapper[4878]: I1204 16:35:30.840720 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:35:30 crc kubenswrapper[4878]: I1204 16:35:30.841043 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:35:31 crc kubenswrapper[4878]: I1204 16:35:31.209741 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" 
path="/var/lib/kubelet/pods/58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9/volumes" Dec 04 16:36:00 crc kubenswrapper[4878]: I1204 16:36:00.840049 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:36:00 crc kubenswrapper[4878]: I1204 16:36:00.840577 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:36:30 crc kubenswrapper[4878]: I1204 16:36:30.841114 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:36:30 crc kubenswrapper[4878]: I1204 16:36:30.841627 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:36:30 crc kubenswrapper[4878]: I1204 16:36:30.841690 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 16:36:30 crc kubenswrapper[4878]: I1204 16:36:30.842801 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"dfa024daa6e4958b14614fe7b8387fc929c32efa6d7f00d6b5ae10c0b9cc827e"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 16:36:30 crc kubenswrapper[4878]: I1204 16:36:30.842910 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://dfa024daa6e4958b14614fe7b8387fc929c32efa6d7f00d6b5ae10c0b9cc827e" gracePeriod=600 Dec 04 16:36:31 crc kubenswrapper[4878]: I1204 16:36:31.939059 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="dfa024daa6e4958b14614fe7b8387fc929c32efa6d7f00d6b5ae10c0b9cc827e" exitCode=0 Dec 04 16:36:31 crc kubenswrapper[4878]: I1204 16:36:31.939153 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"dfa024daa6e4958b14614fe7b8387fc929c32efa6d7f00d6b5ae10c0b9cc827e"} Dec 04 16:36:31 crc kubenswrapper[4878]: I1204 16:36:31.939890 4878 scope.go:117] "RemoveContainer" containerID="346f8670dc684d7b22388a4c40a4b019a4a9cc637b6a554320aa806cd6408d82" Dec 04 16:36:31 crc kubenswrapper[4878]: I1204 16:36:31.941003 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e"} Dec 04 16:36:42 crc kubenswrapper[4878]: I1204 16:36:42.033899 4878 generic.go:334] "Generic (PLEG): container finished" podID="3e394916-5de1-45b1-9e49-246be63a5689" 
containerID="56eaf0eaaf96e186e63d370885ed5525bb00ebb20f40d6d909feebf6382056c1" exitCode=0 Dec 04 16:36:42 crc kubenswrapper[4878]: I1204 16:36:42.034378 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3e394916-5de1-45b1-9e49-246be63a5689","Type":"ContainerDied","Data":"56eaf0eaaf96e186e63d370885ed5525bb00ebb20f40d6d909feebf6382056c1"} Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.440273 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.572441 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config-secret\") pod \"3e394916-5de1-45b1-9e49-246be63a5689\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.572508 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-temporary\") pod \"3e394916-5de1-45b1-9e49-246be63a5689\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.572597 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"3e394916-5de1-45b1-9e49-246be63a5689\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.572701 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config\") pod \"3e394916-5de1-45b1-9e49-246be63a5689\" (UID: 
\"3e394916-5de1-45b1-9e49-246be63a5689\") " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.572736 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-workdir\") pod \"3e394916-5de1-45b1-9e49-246be63a5689\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.572769 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ssh-key\") pod \"3e394916-5de1-45b1-9e49-246be63a5689\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.572799 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5tvt\" (UniqueName: \"kubernetes.io/projected/3e394916-5de1-45b1-9e49-246be63a5689-kube-api-access-s5tvt\") pod \"3e394916-5de1-45b1-9e49-246be63a5689\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.572857 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-config-data\") pod \"3e394916-5de1-45b1-9e49-246be63a5689\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.572952 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ca-certs\") pod \"3e394916-5de1-45b1-9e49-246be63a5689\" (UID: \"3e394916-5de1-45b1-9e49-246be63a5689\") " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.574678 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "3e394916-5de1-45b1-9e49-246be63a5689" (UID: "3e394916-5de1-45b1-9e49-246be63a5689"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.575605 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-config-data" (OuterVolumeSpecName: "config-data") pod "3e394916-5de1-45b1-9e49-246be63a5689" (UID: "3e394916-5de1-45b1-9e49-246be63a5689"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.578399 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "3e394916-5de1-45b1-9e49-246be63a5689" (UID: "3e394916-5de1-45b1-9e49-246be63a5689"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.580897 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "test-operator-logs") pod "3e394916-5de1-45b1-9e49-246be63a5689" (UID: "3e394916-5de1-45b1-9e49-246be63a5689"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.581459 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e394916-5de1-45b1-9e49-246be63a5689-kube-api-access-s5tvt" (OuterVolumeSpecName: "kube-api-access-s5tvt") pod "3e394916-5de1-45b1-9e49-246be63a5689" (UID: "3e394916-5de1-45b1-9e49-246be63a5689"). InnerVolumeSpecName "kube-api-access-s5tvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.609038 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3e394916-5de1-45b1-9e49-246be63a5689" (UID: "3e394916-5de1-45b1-9e49-246be63a5689"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.611403 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "3e394916-5de1-45b1-9e49-246be63a5689" (UID: "3e394916-5de1-45b1-9e49-246be63a5689"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.613115 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "3e394916-5de1-45b1-9e49-246be63a5689" (UID: "3e394916-5de1-45b1-9e49-246be63a5689"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.639926 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "3e394916-5de1-45b1-9e49-246be63a5689" (UID: "3e394916-5de1-45b1-9e49-246be63a5689"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.676924 4878 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.676968 4878 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.677006 4878 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.677018 4878 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.677031 4878 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/3e394916-5de1-45b1-9e49-246be63a5689-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.677041 4878 
reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.677049 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5tvt\" (UniqueName: \"kubernetes.io/projected/3e394916-5de1-45b1-9e49-246be63a5689-kube-api-access-s5tvt\") on node \"crc\" DevicePath \"\"" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.677058 4878 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e394916-5de1-45b1-9e49-246be63a5689-config-data\") on node \"crc\" DevicePath \"\"" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.677066 4878 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/3e394916-5de1-45b1-9e49-246be63a5689-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.703985 4878 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 04 16:36:43 crc kubenswrapper[4878]: I1204 16:36:43.779750 4878 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 04 16:36:44 crc kubenswrapper[4878]: I1204 16:36:44.052711 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"3e394916-5de1-45b1-9e49-246be63a5689","Type":"ContainerDied","Data":"a68819830ca05f996b8b419ea6e17cefbbf0b0b6a28c47b09eae9c1ec95fe0a4"} Dec 04 16:36:44 crc kubenswrapper[4878]: I1204 16:36:44.053068 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68819830ca05f996b8b419ea6e17cefbbf0b0b6a28c47b09eae9c1ec95fe0a4" Dec 04 16:36:44 crc 
kubenswrapper[4878]: I1204 16:36:44.052754 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.111154 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 04 16:36:46 crc kubenswrapper[4878]: E1204 16:36:46.114915 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerName="registry-server" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.114959 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerName="registry-server" Dec 04 16:36:46 crc kubenswrapper[4878]: E1204 16:36:46.115001 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerName="registry-server" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.115010 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerName="registry-server" Dec 04 16:36:46 crc kubenswrapper[4878]: E1204 16:36:46.115038 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerName="extract-content" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.115048 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerName="extract-content" Dec 04 16:36:46 crc kubenswrapper[4878]: E1204 16:36:46.115067 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerName="extract-utilities" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.115076 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerName="extract-utilities" Dec 04 16:36:46 crc kubenswrapper[4878]: E1204 
16:36:46.115099 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerName="extract-utilities" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.115109 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerName="extract-utilities" Dec 04 16:36:46 crc kubenswrapper[4878]: E1204 16:36:46.115126 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e394916-5de1-45b1-9e49-246be63a5689" containerName="tempest-tests-tempest-tests-runner" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.115139 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e394916-5de1-45b1-9e49-246be63a5689" containerName="tempest-tests-tempest-tests-runner" Dec 04 16:36:46 crc kubenswrapper[4878]: E1204 16:36:46.115166 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerName="registry-server" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.115173 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerName="registry-server" Dec 04 16:36:46 crc kubenswrapper[4878]: E1204 16:36:46.115206 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerName="extract-content" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.115214 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerName="extract-content" Dec 04 16:36:46 crc kubenswrapper[4878]: E1204 16:36:46.115234 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerName="extract-utilities" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.115243 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerName="extract-utilities" Dec 04 16:36:46 
crc kubenswrapper[4878]: E1204 16:36:46.115257 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerName="extract-content" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.115264 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerName="extract-content" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.117089 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8d4c99a-6699-4b8c-ab50-38ab5e4f9c73" containerName="registry-server" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.117117 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d5bb59-c5c3-44c4-8c5a-c0af26ddb2d9" containerName="registry-server" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.117172 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e394916-5de1-45b1-9e49-246be63a5689" containerName="tempest-tests-tempest-tests-runner" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.117209 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b627001-9dba-4526-a1b7-6dca27fecc67" containerName="registry-server" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.119500 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.123237 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tgvf9" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.266148 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"186b727a-be9a-401e-9ec6-fc48097d479a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.266261 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbv7q\" (UniqueName: \"kubernetes.io/projected/186b727a-be9a-401e-9ec6-fc48097d479a-kube-api-access-nbv7q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"186b727a-be9a-401e-9ec6-fc48097d479a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.293294 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.368042 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbv7q\" (UniqueName: \"kubernetes.io/projected/186b727a-be9a-401e-9ec6-fc48097d479a-kube-api-access-nbv7q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"186b727a-be9a-401e-9ec6-fc48097d479a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.368628 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"186b727a-be9a-401e-9ec6-fc48097d479a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.369487 4878 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"186b727a-be9a-401e-9ec6-fc48097d479a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.387740 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbv7q\" (UniqueName: \"kubernetes.io/projected/186b727a-be9a-401e-9ec6-fc48097d479a-kube-api-access-nbv7q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"186b727a-be9a-401e-9ec6-fc48097d479a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.404783 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"186b727a-be9a-401e-9ec6-fc48097d479a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 16:36:46 crc kubenswrapper[4878]: I1204 16:36:46.534539 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 04 16:36:47 crc kubenswrapper[4878]: I1204 16:36:47.025230 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 04 16:36:47 crc kubenswrapper[4878]: I1204 16:36:47.078762 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"186b727a-be9a-401e-9ec6-fc48097d479a","Type":"ContainerStarted","Data":"b778feb72041c971c031768c7907e1c96131c5d1aa5e6de7ad1c85db9ee5f1f0"} Dec 04 16:36:49 crc kubenswrapper[4878]: I1204 16:36:49.100645 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"186b727a-be9a-401e-9ec6-fc48097d479a","Type":"ContainerStarted","Data":"5454f8522a40306a6581048d25047f95424eedd331c0f0d66aa61f1d30cc9857"} Dec 04 16:36:49 crc kubenswrapper[4878]: I1204 16:36:49.121777 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.170884558 podStartE2EDuration="3.121758423s" podCreationTimestamp="2025-12-04 16:36:46 +0000 UTC" firstStartedPulling="2025-12-04 16:36:47.030166205 +0000 UTC m=+3650.992703161" lastFinishedPulling="2025-12-04 16:36:47.98104001 +0000 UTC m=+3651.943577026" observedRunningTime="2025-12-04 16:36:49.11722311 +0000 UTC m=+3653.079760086" watchObservedRunningTime="2025-12-04 16:36:49.121758423 +0000 UTC m=+3653.084295369" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.690145 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-chv6w"] Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.693925 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.709086 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv6w"] Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.816396 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-utilities\") pod \"redhat-marketplace-chv6w\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.816719 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-catalog-content\") pod \"redhat-marketplace-chv6w\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.817028 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7n4v\" (UniqueName: \"kubernetes.io/projected/acda4635-0fd8-47c0-a31d-c10a46251cd4-kube-api-access-r7n4v\") pod \"redhat-marketplace-chv6w\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.918754 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7n4v\" (UniqueName: \"kubernetes.io/projected/acda4635-0fd8-47c0-a31d-c10a46251cd4-kube-api-access-r7n4v\") pod \"redhat-marketplace-chv6w\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.918946 4878 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-utilities\") pod \"redhat-marketplace-chv6w\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.919163 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-catalog-content\") pod \"redhat-marketplace-chv6w\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.919566 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-utilities\") pod \"redhat-marketplace-chv6w\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.919625 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-catalog-content\") pod \"redhat-marketplace-chv6w\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:10 crc kubenswrapper[4878]: I1204 16:37:10.937555 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7n4v\" (UniqueName: \"kubernetes.io/projected/acda4635-0fd8-47c0-a31d-c10a46251cd4-kube-api-access-r7n4v\") pod \"redhat-marketplace-chv6w\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.032111 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.521028 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv6w"] Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.708674 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v4sjg/must-gather-9q6qt"] Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.710543 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.713154 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v4sjg"/"openshift-service-ca.crt" Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.722178 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v4sjg"/"kube-root-ca.crt" Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.736542 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v4sjg/must-gather-9q6qt"] Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.836865 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mp8r\" (UniqueName: \"kubernetes.io/projected/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-kube-api-access-2mp8r\") pod \"must-gather-9q6qt\" (UID: \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\") " pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.837124 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-must-gather-output\") pod \"must-gather-9q6qt\" (UID: \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\") " pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:37:11 
crc kubenswrapper[4878]: I1204 16:37:11.938860 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-must-gather-output\") pod \"must-gather-9q6qt\" (UID: \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\") " pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.939023 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mp8r\" (UniqueName: \"kubernetes.io/projected/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-kube-api-access-2mp8r\") pod \"must-gather-9q6qt\" (UID: \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\") " pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.939890 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-must-gather-output\") pod \"must-gather-9q6qt\" (UID: \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\") " pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:37:11 crc kubenswrapper[4878]: I1204 16:37:11.962446 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mp8r\" (UniqueName: \"kubernetes.io/projected/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-kube-api-access-2mp8r\") pod \"must-gather-9q6qt\" (UID: \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\") " pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:37:12 crc kubenswrapper[4878]: I1204 16:37:12.034694 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:37:12 crc kubenswrapper[4878]: I1204 16:37:12.328854 4878 generic.go:334] "Generic (PLEG): container finished" podID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerID="33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26" exitCode=0 Dec 04 16:37:12 crc kubenswrapper[4878]: I1204 16:37:12.328928 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv6w" event={"ID":"acda4635-0fd8-47c0-a31d-c10a46251cd4","Type":"ContainerDied","Data":"33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26"} Dec 04 16:37:12 crc kubenswrapper[4878]: I1204 16:37:12.328956 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv6w" event={"ID":"acda4635-0fd8-47c0-a31d-c10a46251cd4","Type":"ContainerStarted","Data":"8fb5bcbf1c1902bc28f9245ccbc62052a803a82329c1a656af11fd3517cb0e15"} Dec 04 16:37:12 crc kubenswrapper[4878]: W1204 16:37:12.503942 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde4f1dff_1033_43c2_96b8_8ff70fbf7f41.slice/crio-521c3575e30d0a0145c15fca5ec7fe2c2e9de6b2604fccebdae30ec53370032b WatchSource:0}: Error finding container 521c3575e30d0a0145c15fca5ec7fe2c2e9de6b2604fccebdae30ec53370032b: Status 404 returned error can't find the container with id 521c3575e30d0a0145c15fca5ec7fe2c2e9de6b2604fccebdae30ec53370032b Dec 04 16:37:12 crc kubenswrapper[4878]: I1204 16:37:12.505152 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v4sjg/must-gather-9q6qt"] Dec 04 16:37:13 crc kubenswrapper[4878]: I1204 16:37:13.342891 4878 generic.go:334] "Generic (PLEG): container finished" podID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerID="a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b" exitCode=0 Dec 04 16:37:13 crc kubenswrapper[4878]: I1204 
16:37:13.342924 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv6w" event={"ID":"acda4635-0fd8-47c0-a31d-c10a46251cd4","Type":"ContainerDied","Data":"a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b"} Dec 04 16:37:13 crc kubenswrapper[4878]: I1204 16:37:13.347090 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/must-gather-9q6qt" event={"ID":"de4f1dff-1033-43c2-96b8-8ff70fbf7f41","Type":"ContainerStarted","Data":"521c3575e30d0a0145c15fca5ec7fe2c2e9de6b2604fccebdae30ec53370032b"} Dec 04 16:37:14 crc kubenswrapper[4878]: I1204 16:37:14.361024 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv6w" event={"ID":"acda4635-0fd8-47c0-a31d-c10a46251cd4","Type":"ContainerStarted","Data":"b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f"} Dec 04 16:37:14 crc kubenswrapper[4878]: I1204 16:37:14.392038 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-chv6w" podStartSLOduration=2.707738514 podStartE2EDuration="4.392009041s" podCreationTimestamp="2025-12-04 16:37:10 +0000 UTC" firstStartedPulling="2025-12-04 16:37:12.331363442 +0000 UTC m=+3676.293900388" lastFinishedPulling="2025-12-04 16:37:14.015633959 +0000 UTC m=+3677.978170915" observedRunningTime="2025-12-04 16:37:14.381424607 +0000 UTC m=+3678.343961583" watchObservedRunningTime="2025-12-04 16:37:14.392009041 +0000 UTC m=+3678.354545997" Dec 04 16:37:17 crc kubenswrapper[4878]: I1204 16:37:17.397372 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/must-gather-9q6qt" event={"ID":"de4f1dff-1033-43c2-96b8-8ff70fbf7f41","Type":"ContainerStarted","Data":"7de39805b9628eca961445b35c865bbb9d474af01ed855cfab044acca22aaa59"} Dec 04 16:37:17 crc kubenswrapper[4878]: I1204 16:37:17.397922 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-must-gather-v4sjg/must-gather-9q6qt" event={"ID":"de4f1dff-1033-43c2-96b8-8ff70fbf7f41","Type":"ContainerStarted","Data":"eed4721768688a135606a8ad6ba13a836e810d77ebe30937ce61bfe5976c369a"} Dec 04 16:37:17 crc kubenswrapper[4878]: I1204 16:37:17.415510 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v4sjg/must-gather-9q6qt" podStartSLOduration=2.017347362 podStartE2EDuration="6.415481182s" podCreationTimestamp="2025-12-04 16:37:11 +0000 UTC" firstStartedPulling="2025-12-04 16:37:12.505921558 +0000 UTC m=+3676.468458524" lastFinishedPulling="2025-12-04 16:37:16.904055398 +0000 UTC m=+3680.866592344" observedRunningTime="2025-12-04 16:37:17.411320349 +0000 UTC m=+3681.373857325" watchObservedRunningTime="2025-12-04 16:37:17.415481182 +0000 UTC m=+3681.378018138" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.032840 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.033465 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.090624 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.216123 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v4sjg/crc-debug-28tcv"] Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.217898 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.221360 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v4sjg"/"default-dockercfg-896gr" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.378034 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/39bc7964-7f20-4378-bfd9-05e965936283-host\") pod \"crc-debug-28tcv\" (UID: \"39bc7964-7f20-4378-bfd9-05e965936283\") " pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.378112 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chq8l\" (UniqueName: \"kubernetes.io/projected/39bc7964-7f20-4378-bfd9-05e965936283-kube-api-access-chq8l\") pod \"crc-debug-28tcv\" (UID: \"39bc7964-7f20-4378-bfd9-05e965936283\") " pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.479795 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/39bc7964-7f20-4378-bfd9-05e965936283-host\") pod \"crc-debug-28tcv\" (UID: \"39bc7964-7f20-4378-bfd9-05e965936283\") " pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.479859 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chq8l\" (UniqueName: \"kubernetes.io/projected/39bc7964-7f20-4378-bfd9-05e965936283-kube-api-access-chq8l\") pod \"crc-debug-28tcv\" (UID: \"39bc7964-7f20-4378-bfd9-05e965936283\") " pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.480032 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/39bc7964-7f20-4378-bfd9-05e965936283-host\") pod \"crc-debug-28tcv\" (UID: \"39bc7964-7f20-4378-bfd9-05e965936283\") " pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.507994 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chq8l\" (UniqueName: \"kubernetes.io/projected/39bc7964-7f20-4378-bfd9-05e965936283-kube-api-access-chq8l\") pod \"crc-debug-28tcv\" (UID: \"39bc7964-7f20-4378-bfd9-05e965936283\") " pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.542529 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.555566 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:21 crc kubenswrapper[4878]: W1204 16:37:21.591323 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39bc7964_7f20_4378_bfd9_05e965936283.slice/crio-bfa1041934a4400bc929ae3cca6ee4cf78c86da59b522b7b970de82a4461d1a5 WatchSource:0}: Error finding container bfa1041934a4400bc929ae3cca6ee4cf78c86da59b522b7b970de82a4461d1a5: Status 404 returned error can't find the container with id bfa1041934a4400bc929ae3cca6ee4cf78c86da59b522b7b970de82a4461d1a5 Dec 04 16:37:21 crc kubenswrapper[4878]: I1204 16:37:21.632296 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv6w"] Dec 04 16:37:22 crc kubenswrapper[4878]: I1204 16:37:22.489225 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/crc-debug-28tcv" event={"ID":"39bc7964-7f20-4378-bfd9-05e965936283","Type":"ContainerStarted","Data":"bfa1041934a4400bc929ae3cca6ee4cf78c86da59b522b7b970de82a4461d1a5"} Dec 
04 16:37:23 crc kubenswrapper[4878]: I1204 16:37:23.498951 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-chv6w" podUID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerName="registry-server" containerID="cri-o://b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f" gracePeriod=2 Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.096151 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.135916 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7n4v\" (UniqueName: \"kubernetes.io/projected/acda4635-0fd8-47c0-a31d-c10a46251cd4-kube-api-access-r7n4v\") pod \"acda4635-0fd8-47c0-a31d-c10a46251cd4\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.136023 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-utilities\") pod \"acda4635-0fd8-47c0-a31d-c10a46251cd4\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.136259 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-catalog-content\") pod \"acda4635-0fd8-47c0-a31d-c10a46251cd4\" (UID: \"acda4635-0fd8-47c0-a31d-c10a46251cd4\") " Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.137469 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-utilities" (OuterVolumeSpecName: "utilities") pod "acda4635-0fd8-47c0-a31d-c10a46251cd4" (UID: "acda4635-0fd8-47c0-a31d-c10a46251cd4"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.158134 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acda4635-0fd8-47c0-a31d-c10a46251cd4-kube-api-access-r7n4v" (OuterVolumeSpecName: "kube-api-access-r7n4v") pod "acda4635-0fd8-47c0-a31d-c10a46251cd4" (UID: "acda4635-0fd8-47c0-a31d-c10a46251cd4"). InnerVolumeSpecName "kube-api-access-r7n4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.228453 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "acda4635-0fd8-47c0-a31d-c10a46251cd4" (UID: "acda4635-0fd8-47c0-a31d-c10a46251cd4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.238479 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.238513 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/acda4635-0fd8-47c0-a31d-c10a46251cd4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.238526 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7n4v\" (UniqueName: \"kubernetes.io/projected/acda4635-0fd8-47c0-a31d-c10a46251cd4-kube-api-access-r7n4v\") on node \"crc\" DevicePath \"\"" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.524598 4878 generic.go:334] "Generic (PLEG): container finished" podID="acda4635-0fd8-47c0-a31d-c10a46251cd4" 
containerID="b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f" exitCode=0 Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.524669 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv6w" event={"ID":"acda4635-0fd8-47c0-a31d-c10a46251cd4","Type":"ContainerDied","Data":"b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f"} Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.524712 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-chv6w" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.524747 4878 scope.go:117] "RemoveContainer" containerID="b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.524729 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-chv6w" event={"ID":"acda4635-0fd8-47c0-a31d-c10a46251cd4","Type":"ContainerDied","Data":"8fb5bcbf1c1902bc28f9245ccbc62052a803a82329c1a656af11fd3517cb0e15"} Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.562639 4878 scope.go:117] "RemoveContainer" containerID="a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.563347 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv6w"] Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.577626 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-chv6w"] Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.589733 4878 scope.go:117] "RemoveContainer" containerID="33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.673264 4878 scope.go:117] "RemoveContainer" containerID="b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f" Dec 04 
16:37:24 crc kubenswrapper[4878]: E1204 16:37:24.675133 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f\": container with ID starting with b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f not found: ID does not exist" containerID="b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.675199 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f"} err="failed to get container status \"b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f\": rpc error: code = NotFound desc = could not find container \"b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f\": container with ID starting with b87d50471e8b98503199a1a8d4835b8e28b7b858343abc64612ac341742bec8f not found: ID does not exist" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.675237 4878 scope.go:117] "RemoveContainer" containerID="a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b" Dec 04 16:37:24 crc kubenswrapper[4878]: E1204 16:37:24.675664 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b\": container with ID starting with a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b not found: ID does not exist" containerID="a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.675707 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b"} err="failed to get container status 
\"a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b\": rpc error: code = NotFound desc = could not find container \"a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b\": container with ID starting with a51bb6fe95c34a7bd1fe0485587a44eea019a3693e6c7cc002962244cbe0a01b not found: ID does not exist" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.675735 4878 scope.go:117] "RemoveContainer" containerID="33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26" Dec 04 16:37:24 crc kubenswrapper[4878]: E1204 16:37:24.677565 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26\": container with ID starting with 33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26 not found: ID does not exist" containerID="33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26" Dec 04 16:37:24 crc kubenswrapper[4878]: I1204 16:37:24.677629 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26"} err="failed to get container status \"33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26\": rpc error: code = NotFound desc = could not find container \"33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26\": container with ID starting with 33600b1c7f78f3d0fb9f9b2f81d39b710e2cebfb63a6eb17531f9e16d4504e26 not found: ID does not exist" Dec 04 16:37:25 crc kubenswrapper[4878]: I1204 16:37:25.192474 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acda4635-0fd8-47c0-a31d-c10a46251cd4" path="/var/lib/kubelet/pods/acda4635-0fd8-47c0-a31d-c10a46251cd4/volumes" Dec 04 16:37:34 crc kubenswrapper[4878]: I1204 16:37:34.641482 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/crc-debug-28tcv" 
event={"ID":"39bc7964-7f20-4378-bfd9-05e965936283","Type":"ContainerStarted","Data":"2e4a9a05e276317d7340614a3fb837d454269da2df4cac43a8fd5ca8bbe52116"} Dec 04 16:37:34 crc kubenswrapper[4878]: I1204 16:37:34.673547 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v4sjg/crc-debug-28tcv" podStartSLOduration=1.738336506 podStartE2EDuration="13.673520956s" podCreationTimestamp="2025-12-04 16:37:21 +0000 UTC" firstStartedPulling="2025-12-04 16:37:21.593761758 +0000 UTC m=+3685.556298714" lastFinishedPulling="2025-12-04 16:37:33.528946208 +0000 UTC m=+3697.491483164" observedRunningTime="2025-12-04 16:37:34.660326774 +0000 UTC m=+3698.622863730" watchObservedRunningTime="2025-12-04 16:37:34.673520956 +0000 UTC m=+3698.636057912" Dec 04 16:38:20 crc kubenswrapper[4878]: I1204 16:38:20.106093 4878 generic.go:334] "Generic (PLEG): container finished" podID="39bc7964-7f20-4378-bfd9-05e965936283" containerID="2e4a9a05e276317d7340614a3fb837d454269da2df4cac43a8fd5ca8bbe52116" exitCode=0 Dec 04 16:38:20 crc kubenswrapper[4878]: I1204 16:38:20.106218 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/crc-debug-28tcv" event={"ID":"39bc7964-7f20-4378-bfd9-05e965936283","Type":"ContainerDied","Data":"2e4a9a05e276317d7340614a3fb837d454269da2df4cac43a8fd5ca8bbe52116"} Dec 04 16:38:21 crc kubenswrapper[4878]: I1204 16:38:21.240477 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:38:21 crc kubenswrapper[4878]: I1204 16:38:21.282259 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v4sjg/crc-debug-28tcv"] Dec 04 16:38:21 crc kubenswrapper[4878]: I1204 16:38:21.293394 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v4sjg/crc-debug-28tcv"] Dec 04 16:38:21 crc kubenswrapper[4878]: I1204 16:38:21.346600 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/39bc7964-7f20-4378-bfd9-05e965936283-host\") pod \"39bc7964-7f20-4378-bfd9-05e965936283\" (UID: \"39bc7964-7f20-4378-bfd9-05e965936283\") " Dec 04 16:38:21 crc kubenswrapper[4878]: I1204 16:38:21.346735 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chq8l\" (UniqueName: \"kubernetes.io/projected/39bc7964-7f20-4378-bfd9-05e965936283-kube-api-access-chq8l\") pod \"39bc7964-7f20-4378-bfd9-05e965936283\" (UID: \"39bc7964-7f20-4378-bfd9-05e965936283\") " Dec 04 16:38:21 crc kubenswrapper[4878]: I1204 16:38:21.346744 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39bc7964-7f20-4378-bfd9-05e965936283-host" (OuterVolumeSpecName: "host") pod "39bc7964-7f20-4378-bfd9-05e965936283" (UID: "39bc7964-7f20-4378-bfd9-05e965936283"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 16:38:21 crc kubenswrapper[4878]: I1204 16:38:21.347257 4878 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/39bc7964-7f20-4378-bfd9-05e965936283-host\") on node \"crc\" DevicePath \"\"" Dec 04 16:38:21 crc kubenswrapper[4878]: I1204 16:38:21.360190 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bc7964-7f20-4378-bfd9-05e965936283-kube-api-access-chq8l" (OuterVolumeSpecName: "kube-api-access-chq8l") pod "39bc7964-7f20-4378-bfd9-05e965936283" (UID: "39bc7964-7f20-4378-bfd9-05e965936283"). InnerVolumeSpecName "kube-api-access-chq8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:38:21 crc kubenswrapper[4878]: I1204 16:38:21.449596 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chq8l\" (UniqueName: \"kubernetes.io/projected/39bc7964-7f20-4378-bfd9-05e965936283-kube-api-access-chq8l\") on node \"crc\" DevicePath \"\"" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.130667 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfa1041934a4400bc929ae3cca6ee4cf78c86da59b522b7b970de82a4461d1a5" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.131380 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-28tcv" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.674881 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v4sjg/crc-debug-49zpn"] Dec 04 16:38:22 crc kubenswrapper[4878]: E1204 16:38:22.676995 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerName="registry-server" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.677121 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerName="registry-server" Dec 04 16:38:22 crc kubenswrapper[4878]: E1204 16:38:22.677215 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bc7964-7f20-4378-bfd9-05e965936283" containerName="container-00" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.677283 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bc7964-7f20-4378-bfd9-05e965936283" containerName="container-00" Dec 04 16:38:22 crc kubenswrapper[4878]: E1204 16:38:22.677378 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerName="extract-content" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.677445 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerName="extract-content" Dec 04 16:38:22 crc kubenswrapper[4878]: E1204 16:38:22.677531 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerName="extract-utilities" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.677611 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerName="extract-utilities" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.677919 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bc7964-7f20-4378-bfd9-05e965936283" 
containerName="container-00" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.677992 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="acda4635-0fd8-47c0-a31d-c10a46251cd4" containerName="registry-server" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.679369 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.686243 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v4sjg"/"default-dockercfg-896gr" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.778432 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5252c2e5-4822-4325-93fe-720995a6c76f-host\") pod \"crc-debug-49zpn\" (UID: \"5252c2e5-4822-4325-93fe-720995a6c76f\") " pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.779107 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8dpq\" (UniqueName: \"kubernetes.io/projected/5252c2e5-4822-4325-93fe-720995a6c76f-kube-api-access-d8dpq\") pod \"crc-debug-49zpn\" (UID: \"5252c2e5-4822-4325-93fe-720995a6c76f\") " pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.881162 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5252c2e5-4822-4325-93fe-720995a6c76f-host\") pod \"crc-debug-49zpn\" (UID: \"5252c2e5-4822-4325-93fe-720995a6c76f\") " pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.881501 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5252c2e5-4822-4325-93fe-720995a6c76f-host\") pod 
\"crc-debug-49zpn\" (UID: \"5252c2e5-4822-4325-93fe-720995a6c76f\") " pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.881559 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8dpq\" (UniqueName: \"kubernetes.io/projected/5252c2e5-4822-4325-93fe-720995a6c76f-kube-api-access-d8dpq\") pod \"crc-debug-49zpn\" (UID: \"5252c2e5-4822-4325-93fe-720995a6c76f\") " pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:22 crc kubenswrapper[4878]: I1204 16:38:22.899544 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8dpq\" (UniqueName: \"kubernetes.io/projected/5252c2e5-4822-4325-93fe-720995a6c76f-kube-api-access-d8dpq\") pod \"crc-debug-49zpn\" (UID: \"5252c2e5-4822-4325-93fe-720995a6c76f\") " pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:23 crc kubenswrapper[4878]: I1204 16:38:23.000354 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:23 crc kubenswrapper[4878]: I1204 16:38:23.143035 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/crc-debug-49zpn" event={"ID":"5252c2e5-4822-4325-93fe-720995a6c76f","Type":"ContainerStarted","Data":"0e9803961d8633ca46aa8a8d442e62411aa1d097156d3f2692ce4f25c82679a0"} Dec 04 16:38:23 crc kubenswrapper[4878]: I1204 16:38:23.190615 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bc7964-7f20-4378-bfd9-05e965936283" path="/var/lib/kubelet/pods/39bc7964-7f20-4378-bfd9-05e965936283/volumes" Dec 04 16:38:24 crc kubenswrapper[4878]: I1204 16:38:24.153664 4878 generic.go:334] "Generic (PLEG): container finished" podID="5252c2e5-4822-4325-93fe-720995a6c76f" containerID="2ba81ab497e5df2e94cc9a8e5557fadf4797a2cb12ffcdf877710945c3c92859" exitCode=0 Dec 04 16:38:24 crc kubenswrapper[4878]: I1204 16:38:24.153717 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/crc-debug-49zpn" event={"ID":"5252c2e5-4822-4325-93fe-720995a6c76f","Type":"ContainerDied","Data":"2ba81ab497e5df2e94cc9a8e5557fadf4797a2cb12ffcdf877710945c3c92859"} Dec 04 16:38:24 crc kubenswrapper[4878]: I1204 16:38:24.692055 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v4sjg/crc-debug-49zpn"] Dec 04 16:38:24 crc kubenswrapper[4878]: I1204 16:38:24.701003 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v4sjg/crc-debug-49zpn"] Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.274640 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.335021 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8dpq\" (UniqueName: \"kubernetes.io/projected/5252c2e5-4822-4325-93fe-720995a6c76f-kube-api-access-d8dpq\") pod \"5252c2e5-4822-4325-93fe-720995a6c76f\" (UID: \"5252c2e5-4822-4325-93fe-720995a6c76f\") " Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.335544 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5252c2e5-4822-4325-93fe-720995a6c76f-host\") pod \"5252c2e5-4822-4325-93fe-720995a6c76f\" (UID: \"5252c2e5-4822-4325-93fe-720995a6c76f\") " Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.335658 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5252c2e5-4822-4325-93fe-720995a6c76f-host" (OuterVolumeSpecName: "host") pod "5252c2e5-4822-4325-93fe-720995a6c76f" (UID: "5252c2e5-4822-4325-93fe-720995a6c76f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.336334 4878 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5252c2e5-4822-4325-93fe-720995a6c76f-host\") on node \"crc\" DevicePath \"\"" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.344687 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5252c2e5-4822-4325-93fe-720995a6c76f-kube-api-access-d8dpq" (OuterVolumeSpecName: "kube-api-access-d8dpq") pod "5252c2e5-4822-4325-93fe-720995a6c76f" (UID: "5252c2e5-4822-4325-93fe-720995a6c76f"). InnerVolumeSpecName "kube-api-access-d8dpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.438329 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8dpq\" (UniqueName: \"kubernetes.io/projected/5252c2e5-4822-4325-93fe-720995a6c76f-kube-api-access-d8dpq\") on node \"crc\" DevicePath \"\"" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.849001 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v4sjg/crc-debug-vw7h9"] Dec 04 16:38:25 crc kubenswrapper[4878]: E1204 16:38:25.849483 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5252c2e5-4822-4325-93fe-720995a6c76f" containerName="container-00" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.849504 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="5252c2e5-4822-4325-93fe-720995a6c76f" containerName="container-00" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.849719 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="5252c2e5-4822-4325-93fe-720995a6c76f" containerName="container-00" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.850535 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.947942 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/223d63ae-f95f-493e-a547-bb551fcb706f-host\") pod \"crc-debug-vw7h9\" (UID: \"223d63ae-f95f-493e-a547-bb551fcb706f\") " pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:25 crc kubenswrapper[4878]: I1204 16:38:25.947994 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx2qp\" (UniqueName: \"kubernetes.io/projected/223d63ae-f95f-493e-a547-bb551fcb706f-kube-api-access-bx2qp\") pod \"crc-debug-vw7h9\" (UID: \"223d63ae-f95f-493e-a547-bb551fcb706f\") " pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:26 crc kubenswrapper[4878]: I1204 16:38:26.050727 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/223d63ae-f95f-493e-a547-bb551fcb706f-host\") pod \"crc-debug-vw7h9\" (UID: \"223d63ae-f95f-493e-a547-bb551fcb706f\") " pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:26 crc kubenswrapper[4878]: I1204 16:38:26.050802 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx2qp\" (UniqueName: \"kubernetes.io/projected/223d63ae-f95f-493e-a547-bb551fcb706f-kube-api-access-bx2qp\") pod \"crc-debug-vw7h9\" (UID: \"223d63ae-f95f-493e-a547-bb551fcb706f\") " pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:26 crc kubenswrapper[4878]: I1204 16:38:26.050845 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/223d63ae-f95f-493e-a547-bb551fcb706f-host\") pod \"crc-debug-vw7h9\" (UID: \"223d63ae-f95f-493e-a547-bb551fcb706f\") " pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:26 crc 
kubenswrapper[4878]: I1204 16:38:26.072821 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx2qp\" (UniqueName: \"kubernetes.io/projected/223d63ae-f95f-493e-a547-bb551fcb706f-kube-api-access-bx2qp\") pod \"crc-debug-vw7h9\" (UID: \"223d63ae-f95f-493e-a547-bb551fcb706f\") " pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:26 crc kubenswrapper[4878]: I1204 16:38:26.173029 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:26 crc kubenswrapper[4878]: I1204 16:38:26.174464 4878 scope.go:117] "RemoveContainer" containerID="2ba81ab497e5df2e94cc9a8e5557fadf4797a2cb12ffcdf877710945c3c92859" Dec 04 16:38:26 crc kubenswrapper[4878]: I1204 16:38:26.174582 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-49zpn" Dec 04 16:38:26 crc kubenswrapper[4878]: W1204 16:38:26.220610 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod223d63ae_f95f_493e_a547_bb551fcb706f.slice/crio-0dcb3929ed6ee7d2444bf4ac12ef20f42ce639fe77b05bc119f2e3bde1987160 WatchSource:0}: Error finding container 0dcb3929ed6ee7d2444bf4ac12ef20f42ce639fe77b05bc119f2e3bde1987160: Status 404 returned error can't find the container with id 0dcb3929ed6ee7d2444bf4ac12ef20f42ce639fe77b05bc119f2e3bde1987160 Dec 04 16:38:27 crc kubenswrapper[4878]: I1204 16:38:27.187266 4878 generic.go:334] "Generic (PLEG): container finished" podID="223d63ae-f95f-493e-a547-bb551fcb706f" containerID="71a96d79c7e4fd564f12a16917001e3edaae1f37e08b61716c1ca6f3f3262148" exitCode=0 Dec 04 16:38:27 crc kubenswrapper[4878]: I1204 16:38:27.192508 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5252c2e5-4822-4325-93fe-720995a6c76f" path="/var/lib/kubelet/pods/5252c2e5-4822-4325-93fe-720995a6c76f/volumes" Dec 04 16:38:27 crc 
kubenswrapper[4878]: I1204 16:38:27.193130 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" event={"ID":"223d63ae-f95f-493e-a547-bb551fcb706f","Type":"ContainerDied","Data":"71a96d79c7e4fd564f12a16917001e3edaae1f37e08b61716c1ca6f3f3262148"} Dec 04 16:38:27 crc kubenswrapper[4878]: I1204 16:38:27.193177 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" event={"ID":"223d63ae-f95f-493e-a547-bb551fcb706f","Type":"ContainerStarted","Data":"0dcb3929ed6ee7d2444bf4ac12ef20f42ce639fe77b05bc119f2e3bde1987160"} Dec 04 16:38:27 crc kubenswrapper[4878]: I1204 16:38:27.236118 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v4sjg/crc-debug-vw7h9"] Dec 04 16:38:27 crc kubenswrapper[4878]: I1204 16:38:27.245233 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v4sjg/crc-debug-vw7h9"] Dec 04 16:38:28 crc kubenswrapper[4878]: I1204 16:38:28.331609 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:28 crc kubenswrapper[4878]: I1204 16:38:28.396494 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx2qp\" (UniqueName: \"kubernetes.io/projected/223d63ae-f95f-493e-a547-bb551fcb706f-kube-api-access-bx2qp\") pod \"223d63ae-f95f-493e-a547-bb551fcb706f\" (UID: \"223d63ae-f95f-493e-a547-bb551fcb706f\") " Dec 04 16:38:28 crc kubenswrapper[4878]: I1204 16:38:28.396859 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/223d63ae-f95f-493e-a547-bb551fcb706f-host\") pod \"223d63ae-f95f-493e-a547-bb551fcb706f\" (UID: \"223d63ae-f95f-493e-a547-bb551fcb706f\") " Dec 04 16:38:28 crc kubenswrapper[4878]: I1204 16:38:28.396932 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/223d63ae-f95f-493e-a547-bb551fcb706f-host" (OuterVolumeSpecName: "host") pod "223d63ae-f95f-493e-a547-bb551fcb706f" (UID: "223d63ae-f95f-493e-a547-bb551fcb706f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 16:38:28 crc kubenswrapper[4878]: I1204 16:38:28.397462 4878 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/223d63ae-f95f-493e-a547-bb551fcb706f-host\") on node \"crc\" DevicePath \"\"" Dec 04 16:38:28 crc kubenswrapper[4878]: I1204 16:38:28.403040 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/223d63ae-f95f-493e-a547-bb551fcb706f-kube-api-access-bx2qp" (OuterVolumeSpecName: "kube-api-access-bx2qp") pod "223d63ae-f95f-493e-a547-bb551fcb706f" (UID: "223d63ae-f95f-493e-a547-bb551fcb706f"). InnerVolumeSpecName "kube-api-access-bx2qp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:38:28 crc kubenswrapper[4878]: I1204 16:38:28.499561 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx2qp\" (UniqueName: \"kubernetes.io/projected/223d63ae-f95f-493e-a547-bb551fcb706f-kube-api-access-bx2qp\") on node \"crc\" DevicePath \"\"" Dec 04 16:38:29 crc kubenswrapper[4878]: I1204 16:38:29.194397 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="223d63ae-f95f-493e-a547-bb551fcb706f" path="/var/lib/kubelet/pods/223d63ae-f95f-493e-a547-bb551fcb706f/volumes" Dec 04 16:38:29 crc kubenswrapper[4878]: I1204 16:38:29.212549 4878 scope.go:117] "RemoveContainer" containerID="71a96d79c7e4fd564f12a16917001e3edaae1f37e08b61716c1ca6f3f3262148" Dec 04 16:38:29 crc kubenswrapper[4878]: I1204 16:38:29.212723 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4sjg/crc-debug-vw7h9" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.135584 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f87cb9798-k84k9_a95965d0-357e-422a-ab31-186d9dce897b/barbican-api/0.log" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.258252 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f87cb9798-k84k9_a95965d0-357e-422a-ab31-186d9dce897b/barbican-api-log/0.log" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.341794 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c7d97c576-6crcc_9a85aaed-250a-44a2-aa46-3ca586b53e2b/barbican-keystone-listener/0.log" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.443312 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c7d97c576-6crcc_9a85aaed-250a-44a2-aa46-3ca586b53e2b/barbican-keystone-listener-log/0.log" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.589798 4878 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5958c7964f-4fxmd_a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0/barbican-worker-log/0.log" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.604001 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5958c7964f-4fxmd_a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0/barbican-worker/0.log" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.778969 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl_844663ab-0b83-4d6a-9493-b8ce0743f963/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.883289 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3/ceilometer-central-agent/0.log" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.930792 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3/ceilometer-notification-agent/0.log" Dec 04 16:38:43 crc kubenswrapper[4878]: I1204 16:38:43.996943 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3/proxy-httpd/0.log" Dec 04 16:38:44 crc kubenswrapper[4878]: I1204 16:38:44.073547 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3/sg-core/0.log" Dec 04 16:38:44 crc kubenswrapper[4878]: I1204 16:38:44.190446 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1b95fffa-975c-44f0-ae14-d0ac3bd06053/cinder-api/0.log" Dec 04 16:38:44 crc kubenswrapper[4878]: I1204 16:38:44.226606 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1b95fffa-975c-44f0-ae14-d0ac3bd06053/cinder-api-log/0.log" Dec 04 16:38:44 crc kubenswrapper[4878]: I1204 16:38:44.424038 4878 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f2640581-49bc-496a-8b18-01d492ff96dc/probe/0.log" Dec 04 16:38:44 crc kubenswrapper[4878]: I1204 16:38:44.472768 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f2640581-49bc-496a-8b18-01d492ff96dc/cinder-scheduler/0.log" Dec 04 16:38:44 crc kubenswrapper[4878]: I1204 16:38:44.605412 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj_992af669-26c3-4266-bf3d-023460cf30b3/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:44 crc kubenswrapper[4878]: I1204 16:38:44.676045 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pp245_a0a7ed48-a6ca-45c3-9d33-2ebce62512b3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:44 crc kubenswrapper[4878]: I1204 16:38:44.845906 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-ttk4q_cb2cb23f-6f8d-43f2-a251-35f680844694/init/0.log" Dec 04 16:38:44 crc kubenswrapper[4878]: I1204 16:38:44.989062 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-ttk4q_cb2cb23f-6f8d-43f2-a251-35f680844694/init/0.log" Dec 04 16:38:45 crc kubenswrapper[4878]: I1204 16:38:45.082001 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl_56812292-222d-4323-86ad-30023b9862b0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:45 crc kubenswrapper[4878]: I1204 16:38:45.089561 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-ttk4q_cb2cb23f-6f8d-43f2-a251-35f680844694/dnsmasq-dns/0.log" Dec 04 16:38:45 crc kubenswrapper[4878]: I1204 16:38:45.355693 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_d695709a-c328-42c1-8193-20cca3f504bc/glance-httpd/0.log" Dec 04 16:38:45 crc kubenswrapper[4878]: I1204 16:38:45.360043 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d695709a-c328-42c1-8193-20cca3f504bc/glance-log/0.log" Dec 04 16:38:45 crc kubenswrapper[4878]: I1204 16:38:45.530914 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_86a592b1-9417-4993-9470-f6077542c0af/glance-httpd/0.log" Dec 04 16:38:45 crc kubenswrapper[4878]: I1204 16:38:45.573767 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_86a592b1-9417-4993-9470-f6077542c0af/glance-log/0.log" Dec 04 16:38:45 crc kubenswrapper[4878]: I1204 16:38:45.682925 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c56cbf696-wj6zc_63307580-b46f-421f-bbf5-52eafde58f6c/horizon/0.log" Dec 04 16:38:45 crc kubenswrapper[4878]: I1204 16:38:45.921549 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn_7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:46 crc kubenswrapper[4878]: I1204 16:38:46.059218 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c56cbf696-wj6zc_63307580-b46f-421f-bbf5-52eafde58f6c/horizon-log/0.log" Dec 04 16:38:46 crc kubenswrapper[4878]: I1204 16:38:46.149077 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-t2mdt_9e312b69-8ad2-408e-9303-bfec15db442e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:46 crc kubenswrapper[4878]: I1204 16:38:46.424426 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-96bf8d55-s7dcq_59f69e03-b3e6-49bf-9b26-e10703659609/keystone-api/0.log" Dec 04 16:38:46 crc kubenswrapper[4878]: I1204 16:38:46.449290 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29414401-99rqr_e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c/keystone-cron/0.log" Dec 04 16:38:46 crc kubenswrapper[4878]: I1204 16:38:46.626314 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_07cca0ce-2eda-43c8-94fa-3a307883e42a/kube-state-metrics/0.log" Dec 04 16:38:46 crc kubenswrapper[4878]: I1204 16:38:46.683724 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-k2n55_7db5ad3f-e745-4eca-92d8-290800fe6115/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:47 crc kubenswrapper[4878]: I1204 16:38:47.156858 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7d556697-lmlhb_0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53/neutron-api/0.log" Dec 04 16:38:47 crc kubenswrapper[4878]: I1204 16:38:47.163990 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7d556697-lmlhb_0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53/neutron-httpd/0.log" Dec 04 16:38:47 crc kubenswrapper[4878]: I1204 16:38:47.236381 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts_96e5fe1c-6d27-40bd-aea8-b89c718d54c0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:47 crc kubenswrapper[4878]: I1204 16:38:47.851647 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c91204a7-98e8-4285-93f6-d6950295491c/nova-api-log/0.log" Dec 04 16:38:47 crc kubenswrapper[4878]: I1204 16:38:47.875272 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_89cdfafa-37af-4e84-8dbf-a9022767eab6/nova-cell0-conductor-conductor/0.log" Dec 04 16:38:48 crc kubenswrapper[4878]: I1204 16:38:48.007988 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c91204a7-98e8-4285-93f6-d6950295491c/nova-api-api/0.log" Dec 04 16:38:48 crc kubenswrapper[4878]: I1204 16:38:48.069259 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_2fe85dc9-a6b6-40ec-90fd-fd5fab214c24/nova-cell1-conductor-conductor/0.log" Dec 04 16:38:48 crc kubenswrapper[4878]: I1204 16:38:48.166057 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_efc8fa35-c810-4c40-8a4c-3a4fee3651ab/nova-cell1-novncproxy-novncproxy/0.log" Dec 04 16:38:48 crc kubenswrapper[4878]: I1204 16:38:48.368145 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p9k2q_c5c443b7-778f-46ba-9ec4-312767ec3a27/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:48 crc kubenswrapper[4878]: I1204 16:38:48.522686 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b3a8cd5b-1c6a-4278-b66d-a0b0802e1546/nova-metadata-log/0.log" Dec 04 16:38:48 crc kubenswrapper[4878]: I1204 16:38:48.819484 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d111affc-9e51-4123-8f21-138b844702db/nova-scheduler-scheduler/0.log" Dec 04 16:38:48 crc kubenswrapper[4878]: I1204 16:38:48.875357 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d775ef8-4c79-4ce4-b5bd-9d3290fb3256/mysql-bootstrap/0.log" Dec 04 16:38:49 crc kubenswrapper[4878]: I1204 16:38:49.072657 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d775ef8-4c79-4ce4-b5bd-9d3290fb3256/mysql-bootstrap/0.log" Dec 04 16:38:49 crc kubenswrapper[4878]: 
I1204 16:38:49.181233 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d775ef8-4c79-4ce4-b5bd-9d3290fb3256/galera/0.log" Dec 04 16:38:49 crc kubenswrapper[4878]: I1204 16:38:49.340380 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3463d397-8d58-4444-ac34-52a0597ca441/mysql-bootstrap/0.log" Dec 04 16:38:49 crc kubenswrapper[4878]: I1204 16:38:49.527668 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3463d397-8d58-4444-ac34-52a0597ca441/mysql-bootstrap/0.log" Dec 04 16:38:49 crc kubenswrapper[4878]: I1204 16:38:49.554452 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3463d397-8d58-4444-ac34-52a0597ca441/galera/0.log" Dec 04 16:38:49 crc kubenswrapper[4878]: I1204 16:38:49.872495 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b3a8cd5b-1c6a-4278-b66d-a0b0802e1546/nova-metadata-metadata/0.log" Dec 04 16:38:49 crc kubenswrapper[4878]: I1204 16:38:49.906116 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c3b24340-938a-4130-a002-841b398d49c5/openstackclient/0.log" Dec 04 16:38:50 crc kubenswrapper[4878]: I1204 16:38:50.116763 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vqhwx_63b68bea-2a97-49cb-bba4-86c730468f8d/openstack-network-exporter/0.log" Dec 04 16:38:50 crc kubenswrapper[4878]: I1204 16:38:50.152537 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cvfgn_5efdf6a5-f2e2-4839-a976-39d5104d7d83/ovsdb-server-init/0.log" Dec 04 16:38:50 crc kubenswrapper[4878]: I1204 16:38:50.339417 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cvfgn_5efdf6a5-f2e2-4839-a976-39d5104d7d83/ovsdb-server-init/0.log" Dec 04 16:38:50 crc kubenswrapper[4878]: I1204 16:38:50.358773 4878 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cvfgn_5efdf6a5-f2e2-4839-a976-39d5104d7d83/ovs-vswitchd/0.log" Dec 04 16:38:50 crc kubenswrapper[4878]: I1204 16:38:50.366347 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cvfgn_5efdf6a5-f2e2-4839-a976-39d5104d7d83/ovsdb-server/0.log" Dec 04 16:38:50 crc kubenswrapper[4878]: I1204 16:38:50.551063 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qt5xl_76972b0d-60b4-427a-83fa-69d53c8c1e64/ovn-controller/0.log" Dec 04 16:38:50 crc kubenswrapper[4878]: I1204 16:38:50.637271 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dqnbd_c4743038-ff21-4107-8e3c-d576536e0c3c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:50 crc kubenswrapper[4878]: I1204 16:38:50.798632 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1cade2c0-4d05-4beb-9bfb-003446587673/openstack-network-exporter/0.log" Dec 04 16:38:50 crc kubenswrapper[4878]: I1204 16:38:50.886211 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1cade2c0-4d05-4beb-9bfb-003446587673/ovn-northd/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.050644 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ab2faf38-cbc6-4141-8553-58bad8a0675f/openstack-network-exporter/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.055464 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ab2faf38-cbc6-4141-8553-58bad8a0675f/ovsdbserver-nb/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.312763 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d7834dc6-68d7-4afb-bbcd-d247294ba85b/ovsdbserver-sb/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.314099 4878 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d7834dc6-68d7-4afb-bbcd-d247294ba85b/openstack-network-exporter/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.436382 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c5668dccb-gv79r_460bb923-1a77-4759-98cb-b6262047cc27/placement-api/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.636758 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c5668dccb-gv79r_460bb923-1a77-4759-98cb-b6262047cc27/placement-log/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.646588 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9686eee-f63a-40e8-a8a6-fe5901d0888c/setup-container/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.896714 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9686eee-f63a-40e8-a8a6-fe5901d0888c/rabbitmq/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.919244 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9686eee-f63a-40e8-a8a6-fe5901d0888c/setup-container/0.log" Dec 04 16:38:51 crc kubenswrapper[4878]: I1204 16:38:51.955896 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_976d4c5a-fb7f-4f01-8d0d-527a87639c33/setup-container/0.log" Dec 04 16:38:52 crc kubenswrapper[4878]: I1204 16:38:52.201891 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_976d4c5a-fb7f-4f01-8d0d-527a87639c33/setup-container/0.log" Dec 04 16:38:52 crc kubenswrapper[4878]: I1204 16:38:52.203803 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_976d4c5a-fb7f-4f01-8d0d-527a87639c33/rabbitmq/0.log" Dec 04 16:38:52 crc kubenswrapper[4878]: I1204 16:38:52.226550 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4_633ccb62-7bfe-48dc-bd16-1a042f8d57f6/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:52 crc kubenswrapper[4878]: I1204 16:38:52.442838 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2wlpr_fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:52 crc kubenswrapper[4878]: I1204 16:38:52.550822 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7_5fec4d01-1d56-4db6-ac76-cb8e2b62a659/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:52 crc kubenswrapper[4878]: I1204 16:38:52.740642 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-b6xnw_b35793af-eea9-4355-8bd0-8a7aec7b412a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:52 crc kubenswrapper[4878]: I1204 16:38:52.798624 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jnbpb_4831aa21-6bfc-415f-b6e1-53a350cf923b/ssh-known-hosts-edpm-deployment/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.041023 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d68fcf6bc-v5rvx_07b8e6cd-af0c-4c2d-97fb-bee728d728a8/proxy-server/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.110328 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d68fcf6bc-v5rvx_07b8e6cd-af0c-4c2d-97fb-bee728d728a8/proxy-httpd/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.253265 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-55hsg_947bbc4c-f673-433d-bc78-4411fea88516/swift-ring-rebalance/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.344374 4878 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/account-auditor/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.494375 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/account-reaper/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.520556 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/account-server/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.521548 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/account-replicator/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.550938 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/container-auditor/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.740901 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/container-server/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.767405 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/container-updater/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.771794 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/container-replicator/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.808685 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-auditor/0.log" Dec 04 16:38:53 crc kubenswrapper[4878]: I1204 16:38:53.971049 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-expirer/0.log" Dec 04 16:38:54 crc kubenswrapper[4878]: I1204 16:38:54.017899 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-server/0.log" Dec 04 16:38:54 crc kubenswrapper[4878]: I1204 16:38:54.028770 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-updater/0.log" Dec 04 16:38:54 crc kubenswrapper[4878]: I1204 16:38:54.072317 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-replicator/0.log" Dec 04 16:38:54 crc kubenswrapper[4878]: I1204 16:38:54.223099 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/rsync/0.log" Dec 04 16:38:54 crc kubenswrapper[4878]: I1204 16:38:54.258609 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/swift-recon-cron/0.log" Dec 04 16:38:54 crc kubenswrapper[4878]: I1204 16:38:54.373032 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk_5c550a94-c515-45cc-9c92-d7b9043486ef/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:38:54 crc kubenswrapper[4878]: I1204 16:38:54.483944 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_3e394916-5de1-45b1-9e49-246be63a5689/tempest-tests-tempest-tests-runner/0.log" Dec 04 16:38:54 crc kubenswrapper[4878]: I1204 16:38:54.656789 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_186b727a-be9a-401e-9ec6-fc48097d479a/test-operator-logs-container/0.log" Dec 04 16:38:54 crc kubenswrapper[4878]: I1204 
16:38:54.778523 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2_e6b0a783-a808-4e9d-a207-6a4c56b36cd9/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:39:00 crc kubenswrapper[4878]: I1204 16:39:00.840660 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:39:00 crc kubenswrapper[4878]: I1204 16:39:00.841312 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:39:04 crc kubenswrapper[4878]: I1204 16:39:04.499981 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_065cfa24-566e-4cb0-8827-acbc50620fee/memcached/0.log" Dec 04 16:39:19 crc kubenswrapper[4878]: I1204 16:39:19.813774 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/util/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.002856 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/pull/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.045614 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/util/0.log" Dec 04 16:39:20 crc 
kubenswrapper[4878]: I1204 16:39:20.055049 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/pull/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.233181 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/pull/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.235181 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/util/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.262402 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/extract/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.412205 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-tbkt6_8553cda1-13f9-4f6f-b301-0f757fbf0021/kube-rbac-proxy/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.515576 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-tbkt6_8553cda1-13f9-4f6f-b301-0f757fbf0021/manager/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.552291 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-d6d6c_a4b2d922-f684-4b6f-93dc-f717d2ece304/kube-rbac-proxy/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.684723 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-d6d6c_a4b2d922-f684-4b6f-93dc-f717d2ece304/manager/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.742699 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-tm72f_69b41a1e-5d38-4364-97bf-af19372d6324/kube-rbac-proxy/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.783323 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-tm72f_69b41a1e-5d38-4364-97bf-af19372d6324/manager/0.log" Dec 04 16:39:20 crc kubenswrapper[4878]: I1204 16:39:20.888355 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-4bdd7_504d742f-8fe2-4006-b94e-bea669f69743/kube-rbac-proxy/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.032224 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-4bdd7_504d742f-8fe2-4006-b94e-bea669f69743/manager/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.118682 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-ghx29_be55b657-228b-4eef-8047-1d4c2577c529/kube-rbac-proxy/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.201282 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-ghx29_be55b657-228b-4eef-8047-1d4c2577c529/manager/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.235708 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-spbj8_8b665720-1363-4671-8211-b91712e627df/kube-rbac-proxy/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.362821 
4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-spbj8_8b665720-1363-4671-8211-b91712e627df/manager/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.435382 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-97pcj_80bb52cf-c5dd-40ef-b4bf-657d731ad9bc/kube-rbac-proxy/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.635577 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-bh7x6_1c586b36-c4f0-4de4-8616-ed14769e76a1/kube-rbac-proxy/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.677057 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-97pcj_80bb52cf-c5dd-40ef-b4bf-657d731ad9bc/manager/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.692043 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-bh7x6_1c586b36-c4f0-4de4-8616-ed14769e76a1/manager/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.854958 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-974fj_1880d469-6774-4848-9df9-31bfd93bc699/kube-rbac-proxy/0.log" Dec 04 16:39:21 crc kubenswrapper[4878]: I1204 16:39:21.985169 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-974fj_1880d469-6774-4848-9df9-31bfd93bc699/manager/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.069644 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-8wvf6_95fa2571-c576-4132-b55a-cb1211301ce8/manager/0.log" Dec 04 16:39:22 crc 
kubenswrapper[4878]: I1204 16:39:22.124763 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-8wvf6_95fa2571-c576-4132-b55a-cb1211301ce8/kube-rbac-proxy/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.173638 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-nbqqp_f925d486-d890-44dc-a416-d976e8b7d188/kube-rbac-proxy/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.287672 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-nbqqp_f925d486-d890-44dc-a416-d976e8b7d188/manager/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.379691 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-cq22h_fb61b1d4-aeeb-4526-8515-4d647d61aa9e/kube-rbac-proxy/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.426777 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-cq22h_fb61b1d4-aeeb-4526-8515-4d647d61aa9e/manager/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.553951 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wrss6_e3e80c29-b107-4969-93d7-e305e1c7eaa2/kube-rbac-proxy/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.664960 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wrss6_e3e80c29-b107-4969-93d7-e305e1c7eaa2/manager/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.758710 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-hms78_cd7d361b-7311-4d32-aaae-21ba66a40d69/kube-rbac-proxy/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.800031 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-hms78_cd7d361b-7311-4d32-aaae-21ba66a40d69/manager/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.967008 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz_82d2275a-c4c7-42a6-9027-cbbf12d0381f/manager/0.log" Dec 04 16:39:22 crc kubenswrapper[4878]: I1204 16:39:22.970256 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz_82d2275a-c4c7-42a6-9027-cbbf12d0381f/kube-rbac-proxy/0.log" Dec 04 16:39:23 crc kubenswrapper[4878]: I1204 16:39:23.369398 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5557b664dc-pw4vq_08b81a71-e15e-4321-932c-37c52be4cf74/operator/0.log" Dec 04 16:39:23 crc kubenswrapper[4878]: I1204 16:39:23.439333 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6fzqf_d52859c7-fd58-4cf6-af6f-a387abd1ea3a/registry-server/0.log" Dec 04 16:39:23 crc kubenswrapper[4878]: I1204 16:39:23.669389 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lg8ds_4b7ee068-250c-4674-8ec2-60dd5c0419be/kube-rbac-proxy/0.log" Dec 04 16:39:23 crc kubenswrapper[4878]: I1204 16:39:23.798265 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lg8ds_4b7ee068-250c-4674-8ec2-60dd5c0419be/manager/0.log" Dec 04 16:39:23 crc kubenswrapper[4878]: I1204 16:39:23.966753 
4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-7cmxk_828a8694-88d9-4658-909b-15188336b78b/kube-rbac-proxy/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.113188 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-7cmxk_828a8694-88d9-4658-909b-15188336b78b/manager/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.243297 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-czgfc_f4bb7917-09ae-4b2a-95c1-172ff14e5771/operator/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.367713 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-n8dqh_9e49df96-9a55-4c5c-864f-cd1aada7db7a/kube-rbac-proxy/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.411651 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-n8dqh_9e49df96-9a55-4c5c-864f-cd1aada7db7a/manager/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.449586 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5f86dd88bc-blw62_c863f265-71e4-4bb2-b872-42d21f42fb5c/manager/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.540624 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-2vjxr_b89e44e5-1b68-4902-9a89-0b14489e1dfb/kube-rbac-proxy/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.656235 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-2vjxr_b89e44e5-1b68-4902-9a89-0b14489e1dfb/manager/0.log" Dec 04 16:39:24 
crc kubenswrapper[4878]: I1204 16:39:24.683434 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-lmpm5_a12e358f-da5d-409b-b9d5-a91897588e65/kube-rbac-proxy/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.729562 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-lmpm5_a12e358f-da5d-409b-b9d5-a91897588e65/manager/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.855126 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-cnxd4_d37a4080-1835-47f1-bad0-040bcb647c80/kube-rbac-proxy/0.log" Dec 04 16:39:24 crc kubenswrapper[4878]: I1204 16:39:24.876903 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-cnxd4_d37a4080-1835-47f1-bad0-040bcb647c80/manager/0.log" Dec 04 16:39:30 crc kubenswrapper[4878]: I1204 16:39:30.840741 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:39:30 crc kubenswrapper[4878]: I1204 16:39:30.841378 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:39:44 crc kubenswrapper[4878]: I1204 16:39:44.244296 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sx899_dbfa5fb1-8fb8-41ef-805d-1034cf88853a/control-plane-machine-set-operator/0.log" Dec 04 16:39:44 crc kubenswrapper[4878]: I1204 16:39:44.431310 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7mgms_1fa17e12-0683-4fba-810b-fa1c10a2738f/kube-rbac-proxy/0.log" Dec 04 16:39:44 crc kubenswrapper[4878]: I1204 16:39:44.469443 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7mgms_1fa17e12-0683-4fba-810b-fa1c10a2738f/machine-api-operator/0.log" Dec 04 16:39:56 crc kubenswrapper[4878]: I1204 16:39:56.334019 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-jq2p4_64e1efea-99bb-4630-82dc-b90418609577/cert-manager-controller/0.log" Dec 04 16:39:56 crc kubenswrapper[4878]: I1204 16:39:56.496716 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-zjf7m_e4d3c25f-014d-4d4e-aff9-291289e798f8/cert-manager-cainjector/0.log" Dec 04 16:39:56 crc kubenswrapper[4878]: I1204 16:39:56.510081 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-6w6cz_f338f7a0-f59d-4f56-8f51-e9aade039feb/cert-manager-webhook/0.log" Dec 04 16:40:00 crc kubenswrapper[4878]: I1204 16:40:00.840736 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:40:00 crc kubenswrapper[4878]: I1204 16:40:00.842007 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:40:00 crc kubenswrapper[4878]: I1204 16:40:00.842115 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 16:40:00 crc kubenswrapper[4878]: I1204 16:40:00.843225 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 16:40:00 crc kubenswrapper[4878]: I1204 16:40:00.843396 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" gracePeriod=600 Dec 04 16:40:00 crc kubenswrapper[4878]: E1204 16:40:00.970897 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:40:01 crc kubenswrapper[4878]: I1204 16:40:01.120698 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" exitCode=0 Dec 04 16:40:01 crc kubenswrapper[4878]: I1204 
16:40:01.120791 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e"} Dec 04 16:40:01 crc kubenswrapper[4878]: I1204 16:40:01.120905 4878 scope.go:117] "RemoveContainer" containerID="dfa024daa6e4958b14614fe7b8387fc929c32efa6d7f00d6b5ae10c0b9cc827e" Dec 04 16:40:01 crc kubenswrapper[4878]: I1204 16:40:01.121678 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:40:01 crc kubenswrapper[4878]: E1204 16:40:01.122255 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:40:08 crc kubenswrapper[4878]: I1204 16:40:08.438721 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-k7kd2_5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d/nmstate-console-plugin/0.log" Dec 04 16:40:08 crc kubenswrapper[4878]: I1204 16:40:08.673016 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mcflh_c72c2914-f98d-4c5b-b885-16f7bcf2f793/nmstate-handler/0.log" Dec 04 16:40:08 crc kubenswrapper[4878]: I1204 16:40:08.710935 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-m2m8b_f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b/kube-rbac-proxy/0.log" Dec 04 16:40:08 crc kubenswrapper[4878]: I1204 16:40:08.760922 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-m2m8b_f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b/nmstate-metrics/0.log" Dec 04 16:40:08 crc kubenswrapper[4878]: I1204 16:40:08.911068 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-n7ffn_e20ad695-cd00-478d-9e02-662d9bceb1a5/nmstate-operator/0.log" Dec 04 16:40:09 crc kubenswrapper[4878]: I1204 16:40:09.075212 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-fxdj9_836684ff-7d25-4bdb-82ba-130f9a37da2b/nmstate-webhook/0.log" Dec 04 16:40:13 crc kubenswrapper[4878]: I1204 16:40:13.181044 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:40:13 crc kubenswrapper[4878]: E1204 16:40:13.181624 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:40:22 crc kubenswrapper[4878]: I1204 16:40:22.965904 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jrsl2_b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f/kube-rbac-proxy/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 16:40:23.191571 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jrsl2_b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f/controller/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 16:40:23.224289 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-frr-files/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 
16:40:23.432034 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-reloader/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 16:40:23.441159 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-frr-files/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 16:40:23.553037 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-metrics/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 16:40:23.595304 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-reloader/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 16:40:23.729310 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-frr-files/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 16:40:23.760469 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-reloader/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 16:40:23.789222 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-metrics/0.log" Dec 04 16:40:23 crc kubenswrapper[4878]: I1204 16:40:23.818754 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-metrics/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.005248 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-frr-files/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.031959 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-metrics/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.040060 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-reloader/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.042721 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/controller/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.222280 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/frr-metrics/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.254772 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/kube-rbac-proxy-frr/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.280304 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/kube-rbac-proxy/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.471476 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/reloader/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.554418 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-xh265_f12abc8b-282b-472f-9bc9-b00c63c1d45c/frr-k8s-webhook-server/0.log" Dec 04 16:40:24 crc kubenswrapper[4878]: I1204 16:40:24.752411 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5fb6fc4594-rpm6b_2236b740-707c-4652-994a-3b5289a54cf1/manager/0.log" Dec 04 16:40:25 crc kubenswrapper[4878]: I1204 16:40:25.007894 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-759dcc4c7f-mgcfw_db6409af-e753-47ac-8370-71aedbe7208d/webhook-server/0.log" Dec 04 16:40:25 crc kubenswrapper[4878]: I1204 16:40:25.124529 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwrcm_293b50a4-7270-4560-bb54-ad9394acbf8d/kube-rbac-proxy/0.log" Dec 04 16:40:25 crc kubenswrapper[4878]: I1204 16:40:25.186155 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:40:25 crc kubenswrapper[4878]: E1204 16:40:25.186401 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:40:25 crc kubenswrapper[4878]: I1204 16:40:25.724103 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwrcm_293b50a4-7270-4560-bb54-ad9394acbf8d/speaker/0.log" Dec 04 16:40:25 crc kubenswrapper[4878]: I1204 16:40:25.769645 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/frr/0.log" Dec 04 16:40:37 crc kubenswrapper[4878]: I1204 16:40:37.187564 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:40:37 crc kubenswrapper[4878]: E1204 16:40:37.188475 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:40:38 crc kubenswrapper[4878]: I1204 16:40:38.212628 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/util/0.log" Dec 04 16:40:38 crc kubenswrapper[4878]: I1204 16:40:38.367770 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/util/0.log" Dec 04 16:40:38 crc kubenswrapper[4878]: I1204 16:40:38.400483 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/pull/0.log" Dec 04 16:40:38 crc kubenswrapper[4878]: I1204 16:40:38.447820 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/pull/0.log" Dec 04 16:40:38 crc kubenswrapper[4878]: I1204 16:40:38.671904 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/pull/0.log" Dec 04 16:40:38 crc kubenswrapper[4878]: I1204 16:40:38.702111 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/util/0.log" Dec 04 16:40:38 crc kubenswrapper[4878]: I1204 16:40:38.881712 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/util/0.log" Dec 04 16:40:38 crc kubenswrapper[4878]: I1204 16:40:38.881993 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/extract/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.036840 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/util/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.075284 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/pull/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.087107 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/pull/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.342995 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/util/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.344115 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/extract/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.358263 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/pull/0.log" Dec 
04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.544707 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-llsv6_c5a70e36-5283-4d62-9ec8-904e5b73a277/extract-utilities/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.765629 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-llsv6_c5a70e36-5283-4d62-9ec8-904e5b73a277/extract-content/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.767205 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-llsv6_c5a70e36-5283-4d62-9ec8-904e5b73a277/extract-content/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.778037 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-llsv6_c5a70e36-5283-4d62-9ec8-904e5b73a277/extract-utilities/0.log" Dec 04 16:40:39 crc kubenswrapper[4878]: I1204 16:40:39.991228 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-llsv6_c5a70e36-5283-4d62-9ec8-904e5b73a277/extract-utilities/0.log" Dec 04 16:40:40 crc kubenswrapper[4878]: I1204 16:40:40.014206 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-llsv6_c5a70e36-5283-4d62-9ec8-904e5b73a277/extract-content/0.log" Dec 04 16:40:40 crc kubenswrapper[4878]: I1204 16:40:40.256511 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-utilities/0.log" Dec 04 16:40:40 crc kubenswrapper[4878]: I1204 16:40:40.274348 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-llsv6_c5a70e36-5283-4d62-9ec8-904e5b73a277/registry-server/0.log" Dec 04 16:40:40 crc kubenswrapper[4878]: I1204 16:40:40.486755 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-utilities/0.log" Dec 04 16:40:40 crc kubenswrapper[4878]: I1204 16:40:40.513483 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-content/0.log" Dec 04 16:40:40 crc kubenswrapper[4878]: I1204 16:40:40.531318 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-content/0.log" Dec 04 16:40:40 crc kubenswrapper[4878]: I1204 16:40:40.744183 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-utilities/0.log" Dec 04 16:40:40 crc kubenswrapper[4878]: I1204 16:40:40.747508 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-content/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.001452 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b9rwj_ea958ba8-bb58-498e-8c25-a5b8f413f3be/marketplace-operator/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.072563 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-utilities/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.297853 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-utilities/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.325413 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-content/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.427488 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-content/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.573219 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/registry-server/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.633468 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-utilities/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.663114 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-content/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.874131 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/registry-server/0.log" Dec 04 16:40:41 crc kubenswrapper[4878]: I1204 16:40:41.928551 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-utilities/0.log" Dec 04 16:40:42 crc kubenswrapper[4878]: I1204 16:40:42.070903 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-utilities/0.log" Dec 04 16:40:42 crc kubenswrapper[4878]: I1204 16:40:42.081847 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-content/0.log" 
Dec 04 16:40:42 crc kubenswrapper[4878]: I1204 16:40:42.096393 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-content/0.log" Dec 04 16:40:42 crc kubenswrapper[4878]: I1204 16:40:42.248592 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-utilities/0.log" Dec 04 16:40:42 crc kubenswrapper[4878]: I1204 16:40:42.271005 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-content/0.log" Dec 04 16:40:42 crc kubenswrapper[4878]: I1204 16:40:42.683412 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/registry-server/0.log" Dec 04 16:40:52 crc kubenswrapper[4878]: I1204 16:40:52.180675 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:40:52 crc kubenswrapper[4878]: E1204 16:40:52.182049 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:41:06 crc kubenswrapper[4878]: I1204 16:41:06.180362 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:41:06 crc kubenswrapper[4878]: E1204 16:41:06.181208 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:41:21 crc kubenswrapper[4878]: I1204 16:41:21.185077 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:41:21 crc kubenswrapper[4878]: E1204 16:41:21.185958 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:41:32 crc kubenswrapper[4878]: I1204 16:41:32.180413 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:41:32 crc kubenswrapper[4878]: E1204 16:41:32.181395 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:41:33 crc kubenswrapper[4878]: I1204 16:41:33.506703 4878 scope.go:117] "RemoveContainer" containerID="45e5feb05fc9b3fa1c05e5d35871abacf046d7a0c232341b8825be7c9f7c6940" Dec 04 16:41:33 crc kubenswrapper[4878]: I1204 16:41:33.554259 4878 scope.go:117] "RemoveContainer" containerID="e3daf069bab4f348c25c08889fe54a5cdaf23f4e210ffb0ab9f02b28edee47cc" Dec 04 16:41:33 crc 
kubenswrapper[4878]: I1204 16:41:33.608862 4878 scope.go:117] "RemoveContainer" containerID="372cdc889c7dc1b40d073ffd50760ee44bcaddedf25e25648f115432bd2bc6b0" Dec 04 16:41:47 crc kubenswrapper[4878]: I1204 16:41:47.187493 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:41:47 crc kubenswrapper[4878]: E1204 16:41:47.188294 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:42:01 crc kubenswrapper[4878]: I1204 16:42:01.179654 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:42:01 crc kubenswrapper[4878]: E1204 16:42:01.180462 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:42:16 crc kubenswrapper[4878]: I1204 16:42:16.181011 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:42:16 crc kubenswrapper[4878]: E1204 16:42:16.182308 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:42:31 crc kubenswrapper[4878]: I1204 16:42:31.180200 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:42:31 crc kubenswrapper[4878]: E1204 16:42:31.181015 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:42:39 crc kubenswrapper[4878]: I1204 16:42:39.890204 4878 generic.go:334] "Generic (PLEG): container finished" podID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" containerID="eed4721768688a135606a8ad6ba13a836e810d77ebe30937ce61bfe5976c369a" exitCode=0 Dec 04 16:42:39 crc kubenswrapper[4878]: I1204 16:42:39.890300 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v4sjg/must-gather-9q6qt" event={"ID":"de4f1dff-1033-43c2-96b8-8ff70fbf7f41","Type":"ContainerDied","Data":"eed4721768688a135606a8ad6ba13a836e810d77ebe30937ce61bfe5976c369a"} Dec 04 16:42:39 crc kubenswrapper[4878]: I1204 16:42:39.892102 4878 scope.go:117] "RemoveContainer" containerID="eed4721768688a135606a8ad6ba13a836e810d77ebe30937ce61bfe5976c369a" Dec 04 16:42:40 crc kubenswrapper[4878]: I1204 16:42:40.780641 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4sjg_must-gather-9q6qt_de4f1dff-1033-43c2-96b8-8ff70fbf7f41/gather/0.log" Dec 04 16:42:42 crc kubenswrapper[4878]: E1204 16:42:42.852215 4878 upgradeaware.go:427] Error proxying data from client to 
backend: readfrom tcp 38.102.83.98:58268->38.102.83.98:41039: write tcp 38.102.83.98:58268->38.102.83.98:41039: write: connection reset by peer Dec 04 16:42:46 crc kubenswrapper[4878]: I1204 16:42:46.180323 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:42:46 crc kubenswrapper[4878]: E1204 16:42:46.181128 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:42:49 crc kubenswrapper[4878]: I1204 16:42:49.764018 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v4sjg/must-gather-9q6qt"] Dec 04 16:42:49 crc kubenswrapper[4878]: I1204 16:42:49.764612 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v4sjg/must-gather-9q6qt" podUID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" containerName="copy" containerID="cri-o://7de39805b9628eca961445b35c865bbb9d474af01ed855cfab044acca22aaa59" gracePeriod=2 Dec 04 16:42:49 crc kubenswrapper[4878]: I1204 16:42:49.775752 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v4sjg/must-gather-9q6qt"] Dec 04 16:42:50 crc kubenswrapper[4878]: I1204 16:42:50.069600 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4sjg_must-gather-9q6qt_de4f1dff-1033-43c2-96b8-8ff70fbf7f41/copy/0.log" Dec 04 16:42:50 crc kubenswrapper[4878]: I1204 16:42:50.072352 4878 generic.go:334] "Generic (PLEG): container finished" podID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" containerID="7de39805b9628eca961445b35c865bbb9d474af01ed855cfab044acca22aaa59" exitCode=143 Dec 
04 16:42:50 crc kubenswrapper[4878]: I1204 16:42:50.890637 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4sjg_must-gather-9q6qt_de4f1dff-1033-43c2-96b8-8ff70fbf7f41/copy/0.log" Dec 04 16:42:50 crc kubenswrapper[4878]: I1204 16:42:50.892571 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:42:50 crc kubenswrapper[4878]: I1204 16:42:50.947403 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-must-gather-output\") pod \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\" (UID: \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\") " Dec 04 16:42:50 crc kubenswrapper[4878]: I1204 16:42:50.947627 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mp8r\" (UniqueName: \"kubernetes.io/projected/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-kube-api-access-2mp8r\") pod \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\" (UID: \"de4f1dff-1033-43c2-96b8-8ff70fbf7f41\") " Dec 04 16:42:50 crc kubenswrapper[4878]: I1204 16:42:50.957203 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-kube-api-access-2mp8r" (OuterVolumeSpecName: "kube-api-access-2mp8r") pod "de4f1dff-1033-43c2-96b8-8ff70fbf7f41" (UID: "de4f1dff-1033-43c2-96b8-8ff70fbf7f41"). InnerVolumeSpecName "kube-api-access-2mp8r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:42:51 crc kubenswrapper[4878]: I1204 16:42:51.050970 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mp8r\" (UniqueName: \"kubernetes.io/projected/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-kube-api-access-2mp8r\") on node \"crc\" DevicePath \"\"" Dec 04 16:42:51 crc kubenswrapper[4878]: I1204 16:42:51.082527 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v4sjg_must-gather-9q6qt_de4f1dff-1033-43c2-96b8-8ff70fbf7f41/copy/0.log" Dec 04 16:42:51 crc kubenswrapper[4878]: I1204 16:42:51.083421 4878 scope.go:117] "RemoveContainer" containerID="7de39805b9628eca961445b35c865bbb9d474af01ed855cfab044acca22aaa59" Dec 04 16:42:51 crc kubenswrapper[4878]: I1204 16:42:51.083683 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v4sjg/must-gather-9q6qt" Dec 04 16:42:51 crc kubenswrapper[4878]: I1204 16:42:51.103660 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "de4f1dff-1033-43c2-96b8-8ff70fbf7f41" (UID: "de4f1dff-1033-43c2-96b8-8ff70fbf7f41"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:42:51 crc kubenswrapper[4878]: I1204 16:42:51.104490 4878 scope.go:117] "RemoveContainer" containerID="eed4721768688a135606a8ad6ba13a836e810d77ebe30937ce61bfe5976c369a" Dec 04 16:42:51 crc kubenswrapper[4878]: I1204 16:42:51.152561 4878 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/de4f1dff-1033-43c2-96b8-8ff70fbf7f41-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 04 16:42:51 crc kubenswrapper[4878]: I1204 16:42:51.191259 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" path="/var/lib/kubelet/pods/de4f1dff-1033-43c2-96b8-8ff70fbf7f41/volumes" Dec 04 16:43:01 crc kubenswrapper[4878]: I1204 16:43:01.179389 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:43:01 crc kubenswrapper[4878]: E1204 16:43:01.181604 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:43:15 crc kubenswrapper[4878]: I1204 16:43:15.181329 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:43:15 crc kubenswrapper[4878]: E1204 16:43:15.182219 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:43:29 crc kubenswrapper[4878]: I1204 16:43:29.179917 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:43:29 crc kubenswrapper[4878]: E1204 16:43:29.181615 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:43:33 crc kubenswrapper[4878]: I1204 16:43:33.682408 4878 scope.go:117] "RemoveContainer" containerID="2e4a9a05e276317d7340614a3fb837d454269da2df4cac43a8fd5ca8bbe52116" Dec 04 16:43:40 crc kubenswrapper[4878]: I1204 16:43:40.179925 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:43:40 crc kubenswrapper[4878]: E1204 16:43:40.180657 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:43:52 crc kubenswrapper[4878]: I1204 16:43:52.179791 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:43:52 crc kubenswrapper[4878]: E1204 16:43:52.180581 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:44:03 crc kubenswrapper[4878]: I1204 16:44:03.180293 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:44:03 crc kubenswrapper[4878]: E1204 16:44:03.181294 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:44:14 crc kubenswrapper[4878]: I1204 16:44:14.179787 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:44:14 crc kubenswrapper[4878]: E1204 16:44:14.180697 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:44:26 crc kubenswrapper[4878]: I1204 16:44:26.180187 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:44:26 crc kubenswrapper[4878]: E1204 16:44:26.183196 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:44:40 crc kubenswrapper[4878]: I1204 16:44:40.179776 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:44:40 crc kubenswrapper[4878]: E1204 16:44:40.180677 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:44:51 crc kubenswrapper[4878]: I1204 16:44:51.179450 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:44:51 crc kubenswrapper[4878]: E1204 16:44:51.180672 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.188756 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t"] Dec 04 16:45:00 crc kubenswrapper[4878]: E1204 16:45:00.189895 4878 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" containerName="copy" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.189917 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" containerName="copy" Dec 04 16:45:00 crc kubenswrapper[4878]: E1204 16:45:00.189947 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" containerName="gather" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.189955 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" containerName="gather" Dec 04 16:45:00 crc kubenswrapper[4878]: E1204 16:45:00.189968 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="223d63ae-f95f-493e-a547-bb551fcb706f" containerName="container-00" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.189978 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="223d63ae-f95f-493e-a547-bb551fcb706f" containerName="container-00" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.190242 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" containerName="gather" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.190274 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="223d63ae-f95f-493e-a547-bb551fcb706f" containerName="container-00" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.190298 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="de4f1dff-1033-43c2-96b8-8ff70fbf7f41" containerName="copy" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.191169 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.194479 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.194733 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.203335 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t"] Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.357692 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6a45ee-82c5-44fb-82aa-a972c50275a0-config-volume\") pod \"collect-profiles-29414445-st22t\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.357864 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwqvr\" (UniqueName: \"kubernetes.io/projected/4c6a45ee-82c5-44fb-82aa-a972c50275a0-kube-api-access-cwqvr\") pod \"collect-profiles-29414445-st22t\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.358338 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6a45ee-82c5-44fb-82aa-a972c50275a0-secret-volume\") pod \"collect-profiles-29414445-st22t\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.460500 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6a45ee-82c5-44fb-82aa-a972c50275a0-secret-volume\") pod \"collect-profiles-29414445-st22t\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.460596 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6a45ee-82c5-44fb-82aa-a972c50275a0-config-volume\") pod \"collect-profiles-29414445-st22t\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.460698 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwqvr\" (UniqueName: \"kubernetes.io/projected/4c6a45ee-82c5-44fb-82aa-a972c50275a0-kube-api-access-cwqvr\") pod \"collect-profiles-29414445-st22t\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.461799 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6a45ee-82c5-44fb-82aa-a972c50275a0-config-volume\") pod \"collect-profiles-29414445-st22t\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.471118 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/4c6a45ee-82c5-44fb-82aa-a972c50275a0-secret-volume\") pod \"collect-profiles-29414445-st22t\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.478170 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwqvr\" (UniqueName: \"kubernetes.io/projected/4c6a45ee-82c5-44fb-82aa-a972c50275a0-kube-api-access-cwqvr\") pod \"collect-profiles-29414445-st22t\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.519716 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:00 crc kubenswrapper[4878]: I1204 16:45:00.967342 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t"] Dec 04 16:45:00 crc kubenswrapper[4878]: W1204 16:45:00.978109 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c6a45ee_82c5_44fb_82aa_a972c50275a0.slice/crio-2110b6701c3f3da754b6be8a267c7b5f78576d84151c77846019e3d2fdc3211a WatchSource:0}: Error finding container 2110b6701c3f3da754b6be8a267c7b5f78576d84151c77846019e3d2fdc3211a: Status 404 returned error can't find the container with id 2110b6701c3f3da754b6be8a267c7b5f78576d84151c77846019e3d2fdc3211a Dec 04 16:45:01 crc kubenswrapper[4878]: I1204 16:45:01.369978 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" event={"ID":"4c6a45ee-82c5-44fb-82aa-a972c50275a0","Type":"ContainerStarted","Data":"958ed2c2f1172bdac8283cf9f15b08abe2d6b56749c897ebf4a7bd5fb16d76f3"} Dec 04 16:45:01 crc 
kubenswrapper[4878]: I1204 16:45:01.370289 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" event={"ID":"4c6a45ee-82c5-44fb-82aa-a972c50275a0","Type":"ContainerStarted","Data":"2110b6701c3f3da754b6be8a267c7b5f78576d84151c77846019e3d2fdc3211a"} Dec 04 16:45:01 crc kubenswrapper[4878]: I1204 16:45:01.398119 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" podStartSLOduration=1.398099116 podStartE2EDuration="1.398099116s" podCreationTimestamp="2025-12-04 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 16:45:01.391951722 +0000 UTC m=+4145.354488678" watchObservedRunningTime="2025-12-04 16:45:01.398099116 +0000 UTC m=+4145.360636072" Dec 04 16:45:02 crc kubenswrapper[4878]: I1204 16:45:02.380605 4878 generic.go:334] "Generic (PLEG): container finished" podID="4c6a45ee-82c5-44fb-82aa-a972c50275a0" containerID="958ed2c2f1172bdac8283cf9f15b08abe2d6b56749c897ebf4a7bd5fb16d76f3" exitCode=0 Dec 04 16:45:02 crc kubenswrapper[4878]: I1204 16:45:02.380722 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" event={"ID":"4c6a45ee-82c5-44fb-82aa-a972c50275a0","Type":"ContainerDied","Data":"958ed2c2f1172bdac8283cf9f15b08abe2d6b56749c897ebf4a7bd5fb16d76f3"} Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.784351 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.865459 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6a45ee-82c5-44fb-82aa-a972c50275a0-secret-volume\") pod \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.865555 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwqvr\" (UniqueName: \"kubernetes.io/projected/4c6a45ee-82c5-44fb-82aa-a972c50275a0-kube-api-access-cwqvr\") pod \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.865918 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6a45ee-82c5-44fb-82aa-a972c50275a0-config-volume\") pod \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\" (UID: \"4c6a45ee-82c5-44fb-82aa-a972c50275a0\") " Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.866666 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c6a45ee-82c5-44fb-82aa-a972c50275a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "4c6a45ee-82c5-44fb-82aa-a972c50275a0" (UID: "4c6a45ee-82c5-44fb-82aa-a972c50275a0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.872492 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6a45ee-82c5-44fb-82aa-a972c50275a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4c6a45ee-82c5-44fb-82aa-a972c50275a0" (UID: "4c6a45ee-82c5-44fb-82aa-a972c50275a0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.872608 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6a45ee-82c5-44fb-82aa-a972c50275a0-kube-api-access-cwqvr" (OuterVolumeSpecName: "kube-api-access-cwqvr") pod "4c6a45ee-82c5-44fb-82aa-a972c50275a0" (UID: "4c6a45ee-82c5-44fb-82aa-a972c50275a0"). InnerVolumeSpecName "kube-api-access-cwqvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.968484 4878 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6a45ee-82c5-44fb-82aa-a972c50275a0-config-volume\") on node \"crc\" DevicePath \"\"" Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.968531 4878 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4c6a45ee-82c5-44fb-82aa-a972c50275a0-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 04 16:45:03 crc kubenswrapper[4878]: I1204 16:45:03.968546 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwqvr\" (UniqueName: \"kubernetes.io/projected/4c6a45ee-82c5-44fb-82aa-a972c50275a0-kube-api-access-cwqvr\") on node \"crc\" DevicePath \"\"" Dec 04 16:45:04 crc kubenswrapper[4878]: I1204 16:45:04.398697 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" event={"ID":"4c6a45ee-82c5-44fb-82aa-a972c50275a0","Type":"ContainerDied","Data":"2110b6701c3f3da754b6be8a267c7b5f78576d84151c77846019e3d2fdc3211a"} Dec 04 16:45:04 crc kubenswrapper[4878]: I1204 16:45:04.398741 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2110b6701c3f3da754b6be8a267c7b5f78576d84151c77846019e3d2fdc3211a" Dec 04 16:45:04 crc kubenswrapper[4878]: I1204 16:45:04.398799 4878 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29414445-st22t" Dec 04 16:45:04 crc kubenswrapper[4878]: I1204 16:45:04.866438 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l"] Dec 04 16:45:04 crc kubenswrapper[4878]: I1204 16:45:04.875430 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29414400-z2j4l"] Dec 04 16:45:05 crc kubenswrapper[4878]: I1204 16:45:05.190988 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533a5341-d0d2-47cc-bb45-4fbfc2562452" path="/var/lib/kubelet/pods/533a5341-d0d2-47cc-bb45-4fbfc2562452/volumes" Dec 04 16:45:06 crc kubenswrapper[4878]: I1204 16:45:06.180229 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:45:06 crc kubenswrapper[4878]: I1204 16:45:06.422746 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"1ea8cd03d7d65256f50f5110dcbea35f33456101867e7aac824b5368df48546a"} Dec 04 16:45:33 crc kubenswrapper[4878]: I1204 16:45:33.768603 4878 scope.go:117] "RemoveContainer" containerID="68fcf586ad4511efb67ca92ad51a28f44d4568d9254d9199030e42f32104ec99" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.643820 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c5z47/must-gather-s524q"] Dec 04 16:45:36 crc kubenswrapper[4878]: E1204 16:45:36.644887 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6a45ee-82c5-44fb-82aa-a972c50275a0" containerName="collect-profiles" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.644903 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6a45ee-82c5-44fb-82aa-a972c50275a0" 
containerName="collect-profiles" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.645139 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6a45ee-82c5-44fb-82aa-a972c50275a0" containerName="collect-profiles" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.646326 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c5z47/must-gather-s524q" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.648887 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c5z47"/"openshift-service-ca.crt" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.649152 4878 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-c5z47"/"default-dockercfg-f4ml9" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.649374 4878 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c5z47"/"kube-root-ca.crt" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.661433 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2m2v\" (UniqueName: \"kubernetes.io/projected/ae09604c-742e-4347-a699-9933f5dc02d1-kube-api-access-k2m2v\") pod \"must-gather-s524q\" (UID: \"ae09604c-742e-4347-a699-9933f5dc02d1\") " pod="openshift-must-gather-c5z47/must-gather-s524q" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.661551 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae09604c-742e-4347-a699-9933f5dc02d1-must-gather-output\") pod \"must-gather-s524q\" (UID: \"ae09604c-742e-4347-a699-9933f5dc02d1\") " pod="openshift-must-gather-c5z47/must-gather-s524q" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.662778 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c5z47/must-gather-s524q"] Dec 04 
16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.764787 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2m2v\" (UniqueName: \"kubernetes.io/projected/ae09604c-742e-4347-a699-9933f5dc02d1-kube-api-access-k2m2v\") pod \"must-gather-s524q\" (UID: \"ae09604c-742e-4347-a699-9933f5dc02d1\") " pod="openshift-must-gather-c5z47/must-gather-s524q" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.764976 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae09604c-742e-4347-a699-9933f5dc02d1-must-gather-output\") pod \"must-gather-s524q\" (UID: \"ae09604c-742e-4347-a699-9933f5dc02d1\") " pod="openshift-must-gather-c5z47/must-gather-s524q" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.766238 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae09604c-742e-4347-a699-9933f5dc02d1-must-gather-output\") pod \"must-gather-s524q\" (UID: \"ae09604c-742e-4347-a699-9933f5dc02d1\") " pod="openshift-must-gather-c5z47/must-gather-s524q" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.796958 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2m2v\" (UniqueName: \"kubernetes.io/projected/ae09604c-742e-4347-a699-9933f5dc02d1-kube-api-access-k2m2v\") pod \"must-gather-s524q\" (UID: \"ae09604c-742e-4347-a699-9933f5dc02d1\") " pod="openshift-must-gather-c5z47/must-gather-s524q" Dec 04 16:45:36 crc kubenswrapper[4878]: I1204 16:45:36.976950 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c5z47/must-gather-s524q" Dec 04 16:45:37 crc kubenswrapper[4878]: I1204 16:45:37.499802 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c5z47/must-gather-s524q"] Dec 04 16:45:37 crc kubenswrapper[4878]: I1204 16:45:37.726090 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/must-gather-s524q" event={"ID":"ae09604c-742e-4347-a699-9933f5dc02d1","Type":"ContainerStarted","Data":"262ab694861e58d85aff1d74f6a8d5aa15351e81746fe0943bb67885d5d5ec7b"} Dec 04 16:45:38 crc kubenswrapper[4878]: I1204 16:45:38.759112 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/must-gather-s524q" event={"ID":"ae09604c-742e-4347-a699-9933f5dc02d1","Type":"ContainerStarted","Data":"0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5"} Dec 04 16:45:38 crc kubenswrapper[4878]: I1204 16:45:38.759696 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/must-gather-s524q" event={"ID":"ae09604c-742e-4347-a699-9933f5dc02d1","Type":"ContainerStarted","Data":"06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972"} Dec 04 16:45:38 crc kubenswrapper[4878]: I1204 16:45:38.779607 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c5z47/must-gather-s524q" podStartSLOduration=2.779587077 podStartE2EDuration="2.779587077s" podCreationTimestamp="2025-12-04 16:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 16:45:38.776184931 +0000 UTC m=+4182.738721897" watchObservedRunningTime="2025-12-04 16:45:38.779587077 +0000 UTC m=+4182.742124033" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.198890 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x4ckg"] Dec 04 16:45:41 crc kubenswrapper[4878]: 
I1204 16:45:41.207428 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.208392 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4ckg"] Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.255647 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s982\" (UniqueName: \"kubernetes.io/projected/4ffe9952-4635-4fcc-a8cf-4db8063a175b-kube-api-access-2s982\") pod \"redhat-operators-x4ckg\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.256260 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-catalog-content\") pod \"redhat-operators-x4ckg\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.256437 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-utilities\") pod \"redhat-operators-x4ckg\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.358460 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-catalog-content\") pod \"redhat-operators-x4ckg\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 
16:45:41.358579 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-utilities\") pod \"redhat-operators-x4ckg\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.358633 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s982\" (UniqueName: \"kubernetes.io/projected/4ffe9952-4635-4fcc-a8cf-4db8063a175b-kube-api-access-2s982\") pod \"redhat-operators-x4ckg\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.359120 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-catalog-content\") pod \"redhat-operators-x4ckg\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.359161 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-utilities\") pod \"redhat-operators-x4ckg\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.383572 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s982\" (UniqueName: \"kubernetes.io/projected/4ffe9952-4635-4fcc-a8cf-4db8063a175b-kube-api-access-2s982\") pod \"redhat-operators-x4ckg\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:41 crc kubenswrapper[4878]: I1204 16:45:41.530447 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:45:42 crc kubenswrapper[4878]: W1204 16:45:42.097867 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ffe9952_4635_4fcc_a8cf_4db8063a175b.slice/crio-1393d56a18e23f6b8c414ac9da6affa63cde8c1933a63284484217db5e8b49ef WatchSource:0}: Error finding container 1393d56a18e23f6b8c414ac9da6affa63cde8c1933a63284484217db5e8b49ef: Status 404 returned error can't find the container with id 1393d56a18e23f6b8c414ac9da6affa63cde8c1933a63284484217db5e8b49ef Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.103249 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x4ckg"] Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.454274 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c5z47/crc-debug-ldgmc"] Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.456238 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.588367 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2911e06-23c3-44d9-a570-e4c61fe786d7-host\") pod \"crc-debug-ldgmc\" (UID: \"e2911e06-23c3-44d9-a570-e4c61fe786d7\") " pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.589181 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs597\" (UniqueName: \"kubernetes.io/projected/e2911e06-23c3-44d9-a570-e4c61fe786d7-kube-api-access-xs597\") pod \"crc-debug-ldgmc\" (UID: \"e2911e06-23c3-44d9-a570-e4c61fe786d7\") " pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.691885 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs597\" (UniqueName: \"kubernetes.io/projected/e2911e06-23c3-44d9-a570-e4c61fe786d7-kube-api-access-xs597\") pod \"crc-debug-ldgmc\" (UID: \"e2911e06-23c3-44d9-a570-e4c61fe786d7\") " pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.692068 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2911e06-23c3-44d9-a570-e4c61fe786d7-host\") pod \"crc-debug-ldgmc\" (UID: \"e2911e06-23c3-44d9-a570-e4c61fe786d7\") " pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.692252 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2911e06-23c3-44d9-a570-e4c61fe786d7-host\") pod \"crc-debug-ldgmc\" (UID: \"e2911e06-23c3-44d9-a570-e4c61fe786d7\") " pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:45:42 crc 
kubenswrapper[4878]: I1204 16:45:42.774501 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs597\" (UniqueName: \"kubernetes.io/projected/e2911e06-23c3-44d9-a570-e4c61fe786d7-kube-api-access-xs597\") pod \"crc-debug-ldgmc\" (UID: \"e2911e06-23c3-44d9-a570-e4c61fe786d7\") " pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.782948 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:45:42 crc kubenswrapper[4878]: I1204 16:45:42.837108 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4ckg" event={"ID":"4ffe9952-4635-4fcc-a8cf-4db8063a175b","Type":"ContainerStarted","Data":"1393d56a18e23f6b8c414ac9da6affa63cde8c1933a63284484217db5e8b49ef"} Dec 04 16:45:43 crc kubenswrapper[4878]: I1204 16:45:43.850764 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/crc-debug-ldgmc" event={"ID":"e2911e06-23c3-44d9-a570-e4c61fe786d7","Type":"ContainerStarted","Data":"8f9b263ca64553af58a8cf6a6f4a1ac9ba295757765eb0edf8d2a5595fd6b4bf"} Dec 04 16:45:44 crc kubenswrapper[4878]: I1204 16:45:44.864628 4878 generic.go:334] "Generic (PLEG): container finished" podID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerID="4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c" exitCode=0 Dec 04 16:45:44 crc kubenswrapper[4878]: I1204 16:45:44.865124 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4ckg" event={"ID":"4ffe9952-4635-4fcc-a8cf-4db8063a175b","Type":"ContainerDied","Data":"4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c"} Dec 04 16:45:44 crc kubenswrapper[4878]: I1204 16:45:44.868454 4878 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 04 16:45:44 crc kubenswrapper[4878]: I1204 16:45:44.874983 
4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/crc-debug-ldgmc" event={"ID":"e2911e06-23c3-44d9-a570-e4c61fe786d7","Type":"ContainerStarted","Data":"8c72d98444f7991d6804b2fca534472223fa7c2c526ca76829a40cbbd670c027"} Dec 04 16:45:44 crc kubenswrapper[4878]: I1204 16:45:44.920903 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c5z47/crc-debug-ldgmc" podStartSLOduration=2.920882995 podStartE2EDuration="2.920882995s" podCreationTimestamp="2025-12-04 16:45:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-04 16:45:44.906270318 +0000 UTC m=+4188.868807274" watchObservedRunningTime="2025-12-04 16:45:44.920882995 +0000 UTC m=+4188.883419951" Dec 04 16:45:45 crc kubenswrapper[4878]: I1204 16:45:45.886002 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4ckg" event={"ID":"4ffe9952-4635-4fcc-a8cf-4db8063a175b","Type":"ContainerStarted","Data":"486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c"} Dec 04 16:45:47 crc kubenswrapper[4878]: I1204 16:45:47.903639 4878 generic.go:334] "Generic (PLEG): container finished" podID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerID="486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c" exitCode=0 Dec 04 16:45:47 crc kubenswrapper[4878]: I1204 16:45:47.904076 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4ckg" event={"ID":"4ffe9952-4635-4fcc-a8cf-4db8063a175b","Type":"ContainerDied","Data":"486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c"} Dec 04 16:45:58 crc kubenswrapper[4878]: E1204 16:45:58.028907 4878 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source 
docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Dec 04 16:45:58 crc kubenswrapper[4878]: E1204 16:45:58.029745 4878 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2s982,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-x4ckg_openshift-marketplace(4ffe9952-4635-4fcc-a8cf-4db8063a175b): ErrImagePull: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" logger="UnhandledError" Dec 04 16:45:58 crc kubenswrapper[4878]: E1204 16:45:58.031057 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)\"" pod="openshift-marketplace/redhat-operators-x4ckg" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 
16:46:05.068836 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2dmnc"] Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.072387 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.083938 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dmnc"] Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.214262 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e6ef95-e627-4431-a469-2c58443aaf6e-utilities\") pod \"certified-operators-2dmnc\" (UID: \"53e6ef95-e627-4431-a469-2c58443aaf6e\") " pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.214340 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhcdg\" (UniqueName: \"kubernetes.io/projected/53e6ef95-e627-4431-a469-2c58443aaf6e-kube-api-access-hhcdg\") pod \"certified-operators-2dmnc\" (UID: \"53e6ef95-e627-4431-a469-2c58443aaf6e\") " pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.214798 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e6ef95-e627-4431-a469-2c58443aaf6e-catalog-content\") pod \"certified-operators-2dmnc\" (UID: \"53e6ef95-e627-4431-a469-2c58443aaf6e\") " pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.317407 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e6ef95-e627-4431-a469-2c58443aaf6e-catalog-content\") pod 
\"certified-operators-2dmnc\" (UID: \"53e6ef95-e627-4431-a469-2c58443aaf6e\") " pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.317555 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e6ef95-e627-4431-a469-2c58443aaf6e-utilities\") pod \"certified-operators-2dmnc\" (UID: \"53e6ef95-e627-4431-a469-2c58443aaf6e\") " pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.317608 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhcdg\" (UniqueName: \"kubernetes.io/projected/53e6ef95-e627-4431-a469-2c58443aaf6e-kube-api-access-hhcdg\") pod \"certified-operators-2dmnc\" (UID: \"53e6ef95-e627-4431-a469-2c58443aaf6e\") " pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.318010 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53e6ef95-e627-4431-a469-2c58443aaf6e-catalog-content\") pod \"certified-operators-2dmnc\" (UID: \"53e6ef95-e627-4431-a469-2c58443aaf6e\") " pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.318042 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53e6ef95-e627-4431-a469-2c58443aaf6e-utilities\") pod \"certified-operators-2dmnc\" (UID: \"53e6ef95-e627-4431-a469-2c58443aaf6e\") " pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.341530 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhcdg\" (UniqueName: \"kubernetes.io/projected/53e6ef95-e627-4431-a469-2c58443aaf6e-kube-api-access-hhcdg\") pod \"certified-operators-2dmnc\" (UID: 
\"53e6ef95-e627-4431-a469-2c58443aaf6e\") " pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:05 crc kubenswrapper[4878]: I1204 16:46:05.412203 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:06 crc kubenswrapper[4878]: W1204 16:46:06.034668 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53e6ef95_e627_4431_a469_2c58443aaf6e.slice/crio-0fc5a19c08378a73ca363cf9235fbf06566ecd0d0e5c52fb6a2a0ed1b77cb9eb WatchSource:0}: Error finding container 0fc5a19c08378a73ca363cf9235fbf06566ecd0d0e5c52fb6a2a0ed1b77cb9eb: Status 404 returned error can't find the container with id 0fc5a19c08378a73ca363cf9235fbf06566ecd0d0e5c52fb6a2a0ed1b77cb9eb Dec 04 16:46:06 crc kubenswrapper[4878]: I1204 16:46:06.047362 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dmnc"] Dec 04 16:46:06 crc kubenswrapper[4878]: I1204 16:46:06.101531 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dmnc" event={"ID":"53e6ef95-e627-4431-a469-2c58443aaf6e","Type":"ContainerStarted","Data":"0fc5a19c08378a73ca363cf9235fbf06566ecd0d0e5c52fb6a2a0ed1b77cb9eb"} Dec 04 16:46:07 crc kubenswrapper[4878]: I1204 16:46:07.111163 4878 generic.go:334] "Generic (PLEG): container finished" podID="53e6ef95-e627-4431-a469-2c58443aaf6e" containerID="8d7025da4c06316e2297eeb33dd8b7c0669758fbded67584e8f1fd1a364a0aac" exitCode=0 Dec 04 16:46:07 crc kubenswrapper[4878]: I1204 16:46:07.111275 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dmnc" event={"ID":"53e6ef95-e627-4431-a469-2c58443aaf6e","Type":"ContainerDied","Data":"8d7025da4c06316e2297eeb33dd8b7c0669758fbded67584e8f1fd1a364a0aac"} Dec 04 16:46:14 crc kubenswrapper[4878]: I1204 16:46:14.212623 4878 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4ckg" event={"ID":"4ffe9952-4635-4fcc-a8cf-4db8063a175b","Type":"ContainerStarted","Data":"584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7"} Dec 04 16:46:14 crc kubenswrapper[4878]: I1204 16:46:14.216192 4878 generic.go:334] "Generic (PLEG): container finished" podID="53e6ef95-e627-4431-a469-2c58443aaf6e" containerID="57a7bffd73b7fbe8a35df498b3ba7d0ff25b415875a0d3a6c5b1731172967a38" exitCode=0 Dec 04 16:46:14 crc kubenswrapper[4878]: I1204 16:46:14.216236 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dmnc" event={"ID":"53e6ef95-e627-4431-a469-2c58443aaf6e","Type":"ContainerDied","Data":"57a7bffd73b7fbe8a35df498b3ba7d0ff25b415875a0d3a6c5b1731172967a38"} Dec 04 16:46:14 crc kubenswrapper[4878]: I1204 16:46:14.233741 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x4ckg" podStartSLOduration=4.988477497 podStartE2EDuration="33.233715686s" podCreationTimestamp="2025-12-04 16:45:41 +0000 UTC" firstStartedPulling="2025-12-04 16:45:44.868210952 +0000 UTC m=+4188.830747908" lastFinishedPulling="2025-12-04 16:46:13.113449141 +0000 UTC m=+4217.075986097" observedRunningTime="2025-12-04 16:46:14.232444115 +0000 UTC m=+4218.194981071" watchObservedRunningTime="2025-12-04 16:46:14.233715686 +0000 UTC m=+4218.196252652" Dec 04 16:46:15 crc kubenswrapper[4878]: I1204 16:46:15.255127 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2dmnc" event={"ID":"53e6ef95-e627-4431-a469-2c58443aaf6e","Type":"ContainerStarted","Data":"ff539cab98bc256f84141f595dcad0bfa9ef53e310ffcd0aab557f2da492037c"} Dec 04 16:46:15 crc kubenswrapper[4878]: I1204 16:46:15.289837 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2dmnc" podStartSLOduration=2.781625219 
podStartE2EDuration="10.289804845s" podCreationTimestamp="2025-12-04 16:46:05 +0000 UTC" firstStartedPulling="2025-12-04 16:46:07.11350775 +0000 UTC m=+4211.076044706" lastFinishedPulling="2025-12-04 16:46:14.621687376 +0000 UTC m=+4218.584224332" observedRunningTime="2025-12-04 16:46:15.276596847 +0000 UTC m=+4219.239133813" watchObservedRunningTime="2025-12-04 16:46:15.289804845 +0000 UTC m=+4219.252341801" Dec 04 16:46:15 crc kubenswrapper[4878]: I1204 16:46:15.413094 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:15 crc kubenswrapper[4878]: I1204 16:46:15.413170 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:16 crc kubenswrapper[4878]: I1204 16:46:16.800270 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2dmnc" podUID="53e6ef95-e627-4431-a469-2c58443aaf6e" containerName="registry-server" probeResult="failure" output=< Dec 04 16:46:16 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 16:46:16 crc kubenswrapper[4878]: > Dec 04 16:46:21 crc kubenswrapper[4878]: I1204 16:46:21.531185 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:46:21 crc kubenswrapper[4878]: I1204 16:46:21.531807 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:46:22 crc kubenswrapper[4878]: I1204 16:46:22.586547 4878 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x4ckg" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerName="registry-server" probeResult="failure" output=< Dec 04 16:46:22 crc kubenswrapper[4878]: timeout: failed to connect service ":50051" within 1s Dec 04 16:46:22 crc kubenswrapper[4878]: > 
Dec 04 16:46:25 crc kubenswrapper[4878]: I1204 16:46:25.474159 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:26 crc kubenswrapper[4878]: I1204 16:46:26.212257 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2dmnc" Dec 04 16:46:26 crc kubenswrapper[4878]: I1204 16:46:26.295459 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2dmnc"] Dec 04 16:46:26 crc kubenswrapper[4878]: I1204 16:46:26.349320 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llsv6"] Dec 04 16:46:26 crc kubenswrapper[4878]: I1204 16:46:26.349666 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-llsv6" podUID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerName="registry-server" containerID="cri-o://1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9" gracePeriod=2 Dec 04 16:46:26 crc kubenswrapper[4878]: I1204 16:46:26.369792 4878 generic.go:334] "Generic (PLEG): container finished" podID="e2911e06-23c3-44d9-a570-e4c61fe786d7" containerID="8c72d98444f7991d6804b2fca534472223fa7c2c526ca76829a40cbbd670c027" exitCode=0 Dec 04 16:46:26 crc kubenswrapper[4878]: I1204 16:46:26.370959 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/crc-debug-ldgmc" event={"ID":"e2911e06-23c3-44d9-a570-e4c61fe786d7","Type":"ContainerDied","Data":"8c72d98444f7991d6804b2fca534472223fa7c2c526ca76829a40cbbd670c027"} Dec 04 16:46:26 crc kubenswrapper[4878]: I1204 16:46:26.934343 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.027979 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-utilities\") pod \"c5a70e36-5283-4d62-9ec8-904e5b73a277\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.028189 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-catalog-content\") pod \"c5a70e36-5283-4d62-9ec8-904e5b73a277\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.028291 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p49hx\" (UniqueName: \"kubernetes.io/projected/c5a70e36-5283-4d62-9ec8-904e5b73a277-kube-api-access-p49hx\") pod \"c5a70e36-5283-4d62-9ec8-904e5b73a277\" (UID: \"c5a70e36-5283-4d62-9ec8-904e5b73a277\") " Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.028794 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-utilities" (OuterVolumeSpecName: "utilities") pod "c5a70e36-5283-4d62-9ec8-904e5b73a277" (UID: "c5a70e36-5283-4d62-9ec8-904e5b73a277"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.051123 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a70e36-5283-4d62-9ec8-904e5b73a277-kube-api-access-p49hx" (OuterVolumeSpecName: "kube-api-access-p49hx") pod "c5a70e36-5283-4d62-9ec8-904e5b73a277" (UID: "c5a70e36-5283-4d62-9ec8-904e5b73a277"). InnerVolumeSpecName "kube-api-access-p49hx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.092618 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c5a70e36-5283-4d62-9ec8-904e5b73a277" (UID: "c5a70e36-5283-4d62-9ec8-904e5b73a277"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.131131 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.131181 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5a70e36-5283-4d62-9ec8-904e5b73a277-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.131211 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p49hx\" (UniqueName: \"kubernetes.io/projected/c5a70e36-5283-4d62-9ec8-904e5b73a277-kube-api-access-p49hx\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.385182 4878 generic.go:334] "Generic (PLEG): container finished" podID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerID="1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9" exitCode=0 Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.385485 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-llsv6" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.386507 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llsv6" event={"ID":"c5a70e36-5283-4d62-9ec8-904e5b73a277","Type":"ContainerDied","Data":"1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9"} Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.386548 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llsv6" event={"ID":"c5a70e36-5283-4d62-9ec8-904e5b73a277","Type":"ContainerDied","Data":"c5526dafe58e0f0a8d1b311d56fb5a4d42069248449dc8d0017f272810435487"} Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.386571 4878 scope.go:117] "RemoveContainer" containerID="1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.460543 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.472831 4878 scope.go:117] "RemoveContainer" containerID="520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.511120 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c5z47/crc-debug-ldgmc"] Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.525272 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c5z47/crc-debug-ldgmc"] Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.535266 4878 scope.go:117] "RemoveContainer" containerID="4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.539643 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2911e06-23c3-44d9-a570-e4c61fe786d7-host\") pod \"e2911e06-23c3-44d9-a570-e4c61fe786d7\" (UID: \"e2911e06-23c3-44d9-a570-e4c61fe786d7\") " Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.539748 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs597\" (UniqueName: \"kubernetes.io/projected/e2911e06-23c3-44d9-a570-e4c61fe786d7-kube-api-access-xs597\") pod \"e2911e06-23c3-44d9-a570-e4c61fe786d7\" (UID: \"e2911e06-23c3-44d9-a570-e4c61fe786d7\") " Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.540088 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llsv6"] Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.540088 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e2911e06-23c3-44d9-a570-e4c61fe786d7-host" (OuterVolumeSpecName: "host") pod "e2911e06-23c3-44d9-a570-e4c61fe786d7" (UID: "e2911e06-23c3-44d9-a570-e4c61fe786d7"). 
InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.540777 4878 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e2911e06-23c3-44d9-a570-e4c61fe786d7-host\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.546102 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2911e06-23c3-44d9-a570-e4c61fe786d7-kube-api-access-xs597" (OuterVolumeSpecName: "kube-api-access-xs597") pod "e2911e06-23c3-44d9-a570-e4c61fe786d7" (UID: "e2911e06-23c3-44d9-a570-e4c61fe786d7"). InnerVolumeSpecName "kube-api-access-xs597". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.575771 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-llsv6"] Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.647215 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs597\" (UniqueName: \"kubernetes.io/projected/e2911e06-23c3-44d9-a570-e4c61fe786d7-kube-api-access-xs597\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.659559 4878 scope.go:117] "RemoveContainer" containerID="1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9" Dec 04 16:46:27 crc kubenswrapper[4878]: E1204 16:46:27.660263 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9\": container with ID starting with 1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9 not found: ID does not exist" containerID="1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.660300 4878 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9"} err="failed to get container status \"1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9\": rpc error: code = NotFound desc = could not find container \"1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9\": container with ID starting with 1bb17e1ecf4ac3adcf55dab9c769dc0c58da6d6f7b2362e6f7df7ae16abd46d9 not found: ID does not exist" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.660330 4878 scope.go:117] "RemoveContainer" containerID="520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f" Dec 04 16:46:27 crc kubenswrapper[4878]: E1204 16:46:27.660585 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f\": container with ID starting with 520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f not found: ID does not exist" containerID="520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.660612 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f"} err="failed to get container status \"520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f\": rpc error: code = NotFound desc = could not find container \"520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f\": container with ID starting with 520d1e285e0cca1cc0931f30d281d03bd9cdb2ad494976a690acc93996f4630f not found: ID does not exist" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.660628 4878 scope.go:117] "RemoveContainer" containerID="4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca" Dec 04 16:46:27 crc kubenswrapper[4878]: E1204 16:46:27.661021 4878 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca\": container with ID starting with 4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca not found: ID does not exist" containerID="4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca" Dec 04 16:46:27 crc kubenswrapper[4878]: I1204 16:46:27.661044 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca"} err="failed to get container status \"4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca\": rpc error: code = NotFound desc = could not find container \"4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca\": container with ID starting with 4b3d21624bdac29995960164b3accf3a043ff373e42a328cf15d9de4895956ca not found: ID does not exist" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.397772 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-ldgmc" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.397755 4878 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f9b263ca64553af58a8cf6a6f4a1ac9ba295757765eb0edf8d2a5595fd6b4bf" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.805798 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c5z47/crc-debug-p49xm"] Dec 04 16:46:28 crc kubenswrapper[4878]: E1204 16:46:28.807352 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerName="registry-server" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.807406 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerName="registry-server" Dec 04 16:46:28 crc kubenswrapper[4878]: E1204 16:46:28.807436 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2911e06-23c3-44d9-a570-e4c61fe786d7" containerName="container-00" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.807445 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2911e06-23c3-44d9-a570-e4c61fe786d7" containerName="container-00" Dec 04 16:46:28 crc kubenswrapper[4878]: E1204 16:46:28.807484 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerName="extract-content" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.807494 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerName="extract-content" Dec 04 16:46:28 crc kubenswrapper[4878]: E1204 16:46:28.807509 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerName="extract-utilities" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.807517 4878 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerName="extract-utilities" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.807796 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2911e06-23c3-44d9-a570-e4c61fe786d7" containerName="container-00" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.807826 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a70e36-5283-4d62-9ec8-904e5b73a277" containerName="registry-server" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.808922 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.975293 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8csxl\" (UniqueName: \"kubernetes.io/projected/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-kube-api-access-8csxl\") pod \"crc-debug-p49xm\" (UID: \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\") " pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:28 crc kubenswrapper[4878]: I1204 16:46:28.975367 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-host\") pod \"crc-debug-p49xm\" (UID: \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\") " pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:29 crc kubenswrapper[4878]: I1204 16:46:29.077148 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8csxl\" (UniqueName: \"kubernetes.io/projected/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-kube-api-access-8csxl\") pod \"crc-debug-p49xm\" (UID: \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\") " pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:29 crc kubenswrapper[4878]: I1204 16:46:29.077208 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-host\") pod \"crc-debug-p49xm\" (UID: \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\") " pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:29 crc kubenswrapper[4878]: I1204 16:46:29.077381 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-host\") pod \"crc-debug-p49xm\" (UID: \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\") " pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:29 crc kubenswrapper[4878]: I1204 16:46:29.106776 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8csxl\" (UniqueName: \"kubernetes.io/projected/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-kube-api-access-8csxl\") pod \"crc-debug-p49xm\" (UID: \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\") " pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:29 crc kubenswrapper[4878]: I1204 16:46:29.129674 4878 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:29 crc kubenswrapper[4878]: I1204 16:46:29.233933 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5a70e36-5283-4d62-9ec8-904e5b73a277" path="/var/lib/kubelet/pods/c5a70e36-5283-4d62-9ec8-904e5b73a277/volumes" Dec 04 16:46:29 crc kubenswrapper[4878]: I1204 16:46:29.234780 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2911e06-23c3-44d9-a570-e4c61fe786d7" path="/var/lib/kubelet/pods/e2911e06-23c3-44d9-a570-e4c61fe786d7/volumes" Dec 04 16:46:29 crc kubenswrapper[4878]: I1204 16:46:29.412165 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/crc-debug-p49xm" event={"ID":"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731","Type":"ContainerStarted","Data":"537f3f01219dae13e862458f921c295c9d653c16538e1985fa212794612c3729"} Dec 04 16:46:30 crc kubenswrapper[4878]: I1204 16:46:30.422894 4878 generic.go:334] "Generic (PLEG): container finished" podID="d59b58d0-5c29-4f5b-b93d-fcbbdf51a731" containerID="eba4815424204c80116b3c1d496b2c8a36b77d233916345ad169fd3e9e188514" exitCode=0 Dec 04 16:46:30 crc kubenswrapper[4878]: I1204 16:46:30.423618 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/crc-debug-p49xm" event={"ID":"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731","Type":"ContainerDied","Data":"eba4815424204c80116b3c1d496b2c8a36b77d233916345ad169fd3e9e188514"} Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.008353 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c5z47/crc-debug-p49xm"] Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.021286 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c5z47/crc-debug-p49xm"] Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.561069 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.579693 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.632728 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.732737 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8csxl\" (UniqueName: \"kubernetes.io/projected/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-kube-api-access-8csxl\") pod \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\" (UID: \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\") " Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.733327 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-host\") pod \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\" (UID: \"d59b58d0-5c29-4f5b-b93d-fcbbdf51a731\") " Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.733520 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-host" (OuterVolumeSpecName: "host") pod "d59b58d0-5c29-4f5b-b93d-fcbbdf51a731" (UID: "d59b58d0-5c29-4f5b-b93d-fcbbdf51a731"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.734100 4878 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-host\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.748759 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-kube-api-access-8csxl" (OuterVolumeSpecName: "kube-api-access-8csxl") pod "d59b58d0-5c29-4f5b-b93d-fcbbdf51a731" (UID: "d59b58d0-5c29-4f5b-b93d-fcbbdf51a731"). InnerVolumeSpecName "kube-api-access-8csxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.836712 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8csxl\" (UniqueName: \"kubernetes.io/projected/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731-kube-api-access-8csxl\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:31 crc kubenswrapper[4878]: I1204 16:46:31.926669 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4ckg"] Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.304840 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c5z47/crc-debug-xplmm"] Dec 04 16:46:32 crc kubenswrapper[4878]: E1204 16:46:32.306325 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59b58d0-5c29-4f5b-b93d-fcbbdf51a731" containerName="container-00" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.306799 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59b58d0-5c29-4f5b-b93d-fcbbdf51a731" containerName="container-00" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.307230 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59b58d0-5c29-4f5b-b93d-fcbbdf51a731" containerName="container-00" Dec 04 16:46:32 crc 
kubenswrapper[4878]: I1204 16:46:32.308363 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.446853 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-p49xm" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.446929 4878 scope.go:117] "RemoveContainer" containerID="eba4815424204c80116b3c1d496b2c8a36b77d233916345ad169fd3e9e188514" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.451147 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61d4eec0-8e70-46c0-82be-214a966576d1-host\") pod \"crc-debug-xplmm\" (UID: \"61d4eec0-8e70-46c0-82be-214a966576d1\") " pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.451393 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc5r2\" (UniqueName: \"kubernetes.io/projected/61d4eec0-8e70-46c0-82be-214a966576d1-kube-api-access-mc5r2\") pod \"crc-debug-xplmm\" (UID: \"61d4eec0-8e70-46c0-82be-214a966576d1\") " pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.553246 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61d4eec0-8e70-46c0-82be-214a966576d1-host\") pod \"crc-debug-xplmm\" (UID: \"61d4eec0-8e70-46c0-82be-214a966576d1\") " pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.553400 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61d4eec0-8e70-46c0-82be-214a966576d1-host\") pod \"crc-debug-xplmm\" (UID: \"61d4eec0-8e70-46c0-82be-214a966576d1\") 
" pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.553723 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc5r2\" (UniqueName: \"kubernetes.io/projected/61d4eec0-8e70-46c0-82be-214a966576d1-kube-api-access-mc5r2\") pod \"crc-debug-xplmm\" (UID: \"61d4eec0-8e70-46c0-82be-214a966576d1\") " pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.575305 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc5r2\" (UniqueName: \"kubernetes.io/projected/61d4eec0-8e70-46c0-82be-214a966576d1-kube-api-access-mc5r2\") pod \"crc-debug-xplmm\" (UID: \"61d4eec0-8e70-46c0-82be-214a966576d1\") " pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:32 crc kubenswrapper[4878]: I1204 16:46:32.628358 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:32 crc kubenswrapper[4878]: W1204 16:46:32.665618 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61d4eec0_8e70_46c0_82be_214a966576d1.slice/crio-ba190e56ff6dbc1b9090144f0d9bec32514daa483e9f23b35a737ce2f67d543d WatchSource:0}: Error finding container ba190e56ff6dbc1b9090144f0d9bec32514daa483e9f23b35a737ce2f67d543d: Status 404 returned error can't find the container with id ba190e56ff6dbc1b9090144f0d9bec32514daa483e9f23b35a737ce2f67d543d Dec 04 16:46:33 crc kubenswrapper[4878]: I1204 16:46:33.192680 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59b58d0-5c29-4f5b-b93d-fcbbdf51a731" path="/var/lib/kubelet/pods/d59b58d0-5c29-4f5b-b93d-fcbbdf51a731/volumes" Dec 04 16:46:33 crc kubenswrapper[4878]: I1204 16:46:33.459395 4878 generic.go:334] "Generic (PLEG): container finished" podID="61d4eec0-8e70-46c0-82be-214a966576d1" 
containerID="39adfe1a29fc8460eda06ca4eebd678a78c3ce7050d0596360c9d367e92c698d" exitCode=0 Dec 04 16:46:33 crc kubenswrapper[4878]: I1204 16:46:33.459441 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/crc-debug-xplmm" event={"ID":"61d4eec0-8e70-46c0-82be-214a966576d1","Type":"ContainerDied","Data":"39adfe1a29fc8460eda06ca4eebd678a78c3ce7050d0596360c9d367e92c698d"} Dec 04 16:46:33 crc kubenswrapper[4878]: I1204 16:46:33.459500 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/crc-debug-xplmm" event={"ID":"61d4eec0-8e70-46c0-82be-214a966576d1","Type":"ContainerStarted","Data":"ba190e56ff6dbc1b9090144f0d9bec32514daa483e9f23b35a737ce2f67d543d"} Dec 04 16:46:33 crc kubenswrapper[4878]: I1204 16:46:33.460808 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x4ckg" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerName="registry-server" containerID="cri-o://584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7" gracePeriod=2 Dec 04 16:46:33 crc kubenswrapper[4878]: I1204 16:46:33.509626 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c5z47/crc-debug-xplmm"] Dec 04 16:46:33 crc kubenswrapper[4878]: I1204 16:46:33.521404 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c5z47/crc-debug-xplmm"] Dec 04 16:46:33 crc kubenswrapper[4878]: I1204 16:46:33.914769 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.023354 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-utilities\") pod \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.023487 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s982\" (UniqueName: \"kubernetes.io/projected/4ffe9952-4635-4fcc-a8cf-4db8063a175b-kube-api-access-2s982\") pod \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.023666 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-catalog-content\") pod \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\" (UID: \"4ffe9952-4635-4fcc-a8cf-4db8063a175b\") " Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.024194 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-utilities" (OuterVolumeSpecName: "utilities") pod "4ffe9952-4635-4fcc-a8cf-4db8063a175b" (UID: "4ffe9952-4635-4fcc-a8cf-4db8063a175b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.036676 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ffe9952-4635-4fcc-a8cf-4db8063a175b-kube-api-access-2s982" (OuterVolumeSpecName: "kube-api-access-2s982") pod "4ffe9952-4635-4fcc-a8cf-4db8063a175b" (UID: "4ffe9952-4635-4fcc-a8cf-4db8063a175b"). InnerVolumeSpecName "kube-api-access-2s982". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.126233 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.126510 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s982\" (UniqueName: \"kubernetes.io/projected/4ffe9952-4635-4fcc-a8cf-4db8063a175b-kube-api-access-2s982\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.147296 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ffe9952-4635-4fcc-a8cf-4db8063a175b" (UID: "4ffe9952-4635-4fcc-a8cf-4db8063a175b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.228595 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ffe9952-4635-4fcc-a8cf-4db8063a175b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.474823 4878 generic.go:334] "Generic (PLEG): container finished" podID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerID="584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7" exitCode=0 Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.474909 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x4ckg" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.474915 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4ckg" event={"ID":"4ffe9952-4635-4fcc-a8cf-4db8063a175b","Type":"ContainerDied","Data":"584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7"} Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.474989 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x4ckg" event={"ID":"4ffe9952-4635-4fcc-a8cf-4db8063a175b","Type":"ContainerDied","Data":"1393d56a18e23f6b8c414ac9da6affa63cde8c1933a63284484217db5e8b49ef"} Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.475022 4878 scope.go:117] "RemoveContainer" containerID="584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.557562 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.558653 4878 scope.go:117] "RemoveContainer" containerID="486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.579119 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x4ckg"] Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.588743 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x4ckg"] Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.605663 4878 scope.go:117] "RemoveContainer" containerID="4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.667888 4878 scope.go:117] "RemoveContainer" containerID="584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7" Dec 04 16:46:34 crc kubenswrapper[4878]: E1204 16:46:34.668465 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7\": container with ID starting with 584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7 not found: ID does not exist" containerID="584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.668512 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7"} err="failed to get container status \"584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7\": rpc error: code = NotFound desc = could not find container \"584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7\": container with ID starting with 584112da34caef3e6b39d0ea058dbd21eba9aeaed851188f7773f6ec30ddd3e7 not found: ID does 
not exist" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.668540 4878 scope.go:117] "RemoveContainer" containerID="486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c" Dec 04 16:46:34 crc kubenswrapper[4878]: E1204 16:46:34.669088 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c\": container with ID starting with 486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c not found: ID does not exist" containerID="486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.669156 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c"} err="failed to get container status \"486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c\": rpc error: code = NotFound desc = could not find container \"486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c\": container with ID starting with 486f0c2df6a765d19240f79f56c42584ebfdb1886a5b39c75439481d844fc12c not found: ID does not exist" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.669197 4878 scope.go:117] "RemoveContainer" containerID="4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c" Dec 04 16:46:34 crc kubenswrapper[4878]: E1204 16:46:34.669647 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c\": container with ID starting with 4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c not found: ID does not exist" containerID="4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.669707 4878 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c"} err="failed to get container status \"4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c\": rpc error: code = NotFound desc = could not find container \"4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c\": container with ID starting with 4b8f40273f1d9b258550bb366d1595fd9a80f14e90a102355dbcc272f9aaf66c not found: ID does not exist" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.738173 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc5r2\" (UniqueName: \"kubernetes.io/projected/61d4eec0-8e70-46c0-82be-214a966576d1-kube-api-access-mc5r2\") pod \"61d4eec0-8e70-46c0-82be-214a966576d1\" (UID: \"61d4eec0-8e70-46c0-82be-214a966576d1\") " Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.738312 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61d4eec0-8e70-46c0-82be-214a966576d1-host\") pod \"61d4eec0-8e70-46c0-82be-214a966576d1\" (UID: \"61d4eec0-8e70-46c0-82be-214a966576d1\") " Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.738453 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61d4eec0-8e70-46c0-82be-214a966576d1-host" (OuterVolumeSpecName: "host") pod "61d4eec0-8e70-46c0-82be-214a966576d1" (UID: "61d4eec0-8e70-46c0-82be-214a966576d1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.739769 4878 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/61d4eec0-8e70-46c0-82be-214a966576d1-host\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.746301 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d4eec0-8e70-46c0-82be-214a966576d1-kube-api-access-mc5r2" (OuterVolumeSpecName: "kube-api-access-mc5r2") pod "61d4eec0-8e70-46c0-82be-214a966576d1" (UID: "61d4eec0-8e70-46c0-82be-214a966576d1"). InnerVolumeSpecName "kube-api-access-mc5r2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:46:34 crc kubenswrapper[4878]: I1204 16:46:34.842109 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc5r2\" (UniqueName: \"kubernetes.io/projected/61d4eec0-8e70-46c0-82be-214a966576d1-kube-api-access-mc5r2\") on node \"crc\" DevicePath \"\"" Dec 04 16:46:35 crc kubenswrapper[4878]: I1204 16:46:35.193538 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" path="/var/lib/kubelet/pods/4ffe9952-4635-4fcc-a8cf-4db8063a175b/volumes" Dec 04 16:46:35 crc kubenswrapper[4878]: I1204 16:46:35.195838 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d4eec0-8e70-46c0-82be-214a966576d1" path="/var/lib/kubelet/pods/61d4eec0-8e70-46c0-82be-214a966576d1/volumes" Dec 04 16:46:35 crc kubenswrapper[4878]: I1204 16:46:35.488069 4878 scope.go:117] "RemoveContainer" containerID="39adfe1a29fc8460eda06ca4eebd678a78c3ce7050d0596360c9d367e92c698d" Dec 04 16:46:35 crc kubenswrapper[4878]: I1204 16:46:35.488100 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c5z47/crc-debug-xplmm" Dec 04 16:46:59 crc kubenswrapper[4878]: I1204 16:46:59.830420 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f87cb9798-k84k9_a95965d0-357e-422a-ab31-186d9dce897b/barbican-api/0.log" Dec 04 16:46:59 crc kubenswrapper[4878]: I1204 16:46:59.928944 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5f87cb9798-k84k9_a95965d0-357e-422a-ab31-186d9dce897b/barbican-api-log/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.075910 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c7d97c576-6crcc_9a85aaed-250a-44a2-aa46-3ca586b53e2b/barbican-keystone-listener/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.155573 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-c7d97c576-6crcc_9a85aaed-250a-44a2-aa46-3ca586b53e2b/barbican-keystone-listener-log/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.270786 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5958c7964f-4fxmd_a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0/barbican-worker/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.298381 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5958c7964f-4fxmd_a8ae18da-3b0c-4cc9-8cb6-77fc6ee6c3b0/barbican-worker-log/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.507331 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-b76fl_844663ab-0b83-4d6a-9493-b8ce0743f963/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.543439 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3/ceilometer-central-agent/0.log" Dec 
04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.687678 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3/ceilometer-notification-agent/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.725416 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3/sg-core/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.729351 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2bbda1b6-b67d-45f7-ba2f-1bc7ddf5dda3/proxy-httpd/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.917929 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1b95fffa-975c-44f0-ae14-d0ac3bd06053/cinder-api/0.log" Dec 04 16:47:00 crc kubenswrapper[4878]: I1204 16:47:00.922205 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_1b95fffa-975c-44f0-ae14-d0ac3bd06053/cinder-api-log/0.log" Dec 04 16:47:01 crc kubenswrapper[4878]: I1204 16:47:01.151767 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f2640581-49bc-496a-8b18-01d492ff96dc/cinder-scheduler/0.log" Dec 04 16:47:01 crc kubenswrapper[4878]: I1204 16:47:01.167303 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f2640581-49bc-496a-8b18-01d492ff96dc/probe/0.log" Dec 04 16:47:01 crc kubenswrapper[4878]: I1204 16:47:01.281677 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jqwjj_992af669-26c3-4266-bf3d-023460cf30b3/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:01 crc kubenswrapper[4878]: I1204 16:47:01.381535 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-pp245_a0a7ed48-a6ca-45c3-9d33-2ebce62512b3/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:01 crc kubenswrapper[4878]: I1204 16:47:01.547357 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-ttk4q_cb2cb23f-6f8d-43f2-a251-35f680844694/init/0.log" Dec 04 16:47:01 crc kubenswrapper[4878]: I1204 16:47:01.664295 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-ttk4q_cb2cb23f-6f8d-43f2-a251-35f680844694/init/0.log" Dec 04 16:47:01 crc kubenswrapper[4878]: I1204 16:47:01.798079 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-7hfpl_56812292-222d-4323-86ad-30023b9862b0/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:01 crc kubenswrapper[4878]: I1204 16:47:01.810189 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-ttk4q_cb2cb23f-6f8d-43f2-a251-35f680844694/dnsmasq-dns/0.log" Dec 04 16:47:01 crc kubenswrapper[4878]: I1204 16:47:01.979092 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d695709a-c328-42c1-8193-20cca3f504bc/glance-httpd/0.log" Dec 04 16:47:02 crc kubenswrapper[4878]: I1204 16:47:02.021943 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_d695709a-c328-42c1-8193-20cca3f504bc/glance-log/0.log" Dec 04 16:47:02 crc kubenswrapper[4878]: I1204 16:47:02.192081 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_86a592b1-9417-4993-9470-f6077542c0af/glance-httpd/0.log" Dec 04 16:47:02 crc kubenswrapper[4878]: I1204 16:47:02.229729 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_86a592b1-9417-4993-9470-f6077542c0af/glance-log/0.log" Dec 04 16:47:02 crc kubenswrapper[4878]: I1204 16:47:02.366824 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c56cbf696-wj6zc_63307580-b46f-421f-bbf5-52eafde58f6c/horizon/0.log" Dec 04 16:47:02 crc kubenswrapper[4878]: I1204 16:47:02.519207 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s5nsn_7f2b93bf-5478-4e8d-8ffb-6f5cc53b6c94/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:02 crc kubenswrapper[4878]: I1204 16:47:02.806596 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-t2mdt_9e312b69-8ad2-408e-9303-bfec15db442e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:02 crc kubenswrapper[4878]: I1204 16:47:02.822480 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c56cbf696-wj6zc_63307580-b46f-421f-bbf5-52eafde58f6c/horizon-log/0.log" Dec 04 16:47:03 crc kubenswrapper[4878]: I1204 16:47:03.037753 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-96bf8d55-s7dcq_59f69e03-b3e6-49bf-9b26-e10703659609/keystone-api/0.log" Dec 04 16:47:03 crc kubenswrapper[4878]: I1204 16:47:03.073955 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29414401-99rqr_e2d9e8d0-f3a1-4a2b-8815-66bee6417b5c/keystone-cron/0.log" Dec 04 16:47:03 crc kubenswrapper[4878]: I1204 16:47:03.283910 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_07cca0ce-2eda-43c8-94fa-3a307883e42a/kube-state-metrics/0.log" Dec 04 16:47:03 crc kubenswrapper[4878]: I1204 16:47:03.351028 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-k2n55_7db5ad3f-e745-4eca-92d8-290800fe6115/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:03 crc kubenswrapper[4878]: I1204 16:47:03.750407 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7d556697-lmlhb_0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53/neutron-httpd/0.log" Dec 04 16:47:03 crc kubenswrapper[4878]: I1204 16:47:03.762217 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7c7d556697-lmlhb_0b305a1f-94fa-4f7a-8c5a-aa5d86f93a53/neutron-api/0.log" Dec 04 16:47:03 crc kubenswrapper[4878]: I1204 16:47:03.826103 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ddzts_96e5fe1c-6d27-40bd-aea8-b89c718d54c0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:04 crc kubenswrapper[4878]: I1204 16:47:04.398822 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c91204a7-98e8-4285-93f6-d6950295491c/nova-api-log/0.log" Dec 04 16:47:04 crc kubenswrapper[4878]: I1204 16:47:04.548110 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_89cdfafa-37af-4e84-8dbf-a9022767eab6/nova-cell0-conductor-conductor/0.log" Dec 04 16:47:04 crc kubenswrapper[4878]: I1204 16:47:04.806056 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_2fe85dc9-a6b6-40ec-90fd-fd5fab214c24/nova-cell1-conductor-conductor/0.log" Dec 04 16:47:04 crc kubenswrapper[4878]: I1204 16:47:04.905459 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c91204a7-98e8-4285-93f6-d6950295491c/nova-api-api/0.log" Dec 04 16:47:04 crc kubenswrapper[4878]: I1204 16:47:04.937969 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_efc8fa35-c810-4c40-8a4c-3a4fee3651ab/nova-cell1-novncproxy-novncproxy/0.log" Dec 04 16:47:05 crc kubenswrapper[4878]: I1204 16:47:05.076520 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-p9k2q_c5c443b7-778f-46ba-9ec4-312767ec3a27/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:05 crc kubenswrapper[4878]: I1204 16:47:05.491973 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b3a8cd5b-1c6a-4278-b66d-a0b0802e1546/nova-metadata-log/0.log" Dec 04 16:47:05 crc kubenswrapper[4878]: I1204 16:47:05.760136 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d775ef8-4c79-4ce4-b5bd-9d3290fb3256/mysql-bootstrap/0.log" Dec 04 16:47:05 crc kubenswrapper[4878]: I1204 16:47:05.789348 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d111affc-9e51-4123-8f21-138b844702db/nova-scheduler-scheduler/0.log" Dec 04 16:47:05 crc kubenswrapper[4878]: I1204 16:47:05.956267 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d775ef8-4c79-4ce4-b5bd-9d3290fb3256/mysql-bootstrap/0.log" Dec 04 16:47:05 crc kubenswrapper[4878]: I1204 16:47:05.979860 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_9d775ef8-4c79-4ce4-b5bd-9d3290fb3256/galera/0.log" Dec 04 16:47:06 crc kubenswrapper[4878]: I1204 16:47:06.155736 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3463d397-8d58-4444-ac34-52a0597ca441/mysql-bootstrap/0.log" Dec 04 16:47:06 crc kubenswrapper[4878]: I1204 16:47:06.378358 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3463d397-8d58-4444-ac34-52a0597ca441/mysql-bootstrap/0.log" Dec 04 16:47:06 crc kubenswrapper[4878]: I1204 16:47:06.391584 4878 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_3463d397-8d58-4444-ac34-52a0597ca441/galera/0.log" Dec 04 16:47:06 crc kubenswrapper[4878]: I1204 16:47:06.597050 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_c3b24340-938a-4130-a002-841b398d49c5/openstackclient/0.log" Dec 04 16:47:06 crc kubenswrapper[4878]: I1204 16:47:06.709487 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vqhwx_63b68bea-2a97-49cb-bba4-86c730468f8d/openstack-network-exporter/0.log" Dec 04 16:47:06 crc kubenswrapper[4878]: I1204 16:47:06.846424 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cvfgn_5efdf6a5-f2e2-4839-a976-39d5104d7d83/ovsdb-server-init/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.009123 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_b3a8cd5b-1c6a-4278-b66d-a0b0802e1546/nova-metadata-metadata/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.086518 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cvfgn_5efdf6a5-f2e2-4839-a976-39d5104d7d83/ovsdb-server-init/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.099335 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cvfgn_5efdf6a5-f2e2-4839-a976-39d5104d7d83/ovs-vswitchd/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.127082 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-cvfgn_5efdf6a5-f2e2-4839-a976-39d5104d7d83/ovsdb-server/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.330719 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-qt5xl_76972b0d-60b4-427a-83fa-69d53c8c1e64/ovn-controller/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.415454 4878 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dqnbd_c4743038-ff21-4107-8e3c-d576536e0c3c/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.572857 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1cade2c0-4d05-4beb-9bfb-003446587673/openstack-network-exporter/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.685823 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1cade2c0-4d05-4beb-9bfb-003446587673/ovn-northd/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.814063 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ab2faf38-cbc6-4141-8553-58bad8a0675f/openstack-network-exporter/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.850346 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ab2faf38-cbc6-4141-8553-58bad8a0675f/ovsdbserver-nb/0.log" Dec 04 16:47:07 crc kubenswrapper[4878]: I1204 16:47:07.949645 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d7834dc6-68d7-4afb-bbcd-d247294ba85b/openstack-network-exporter/0.log" Dec 04 16:47:08 crc kubenswrapper[4878]: I1204 16:47:08.071009 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d7834dc6-68d7-4afb-bbcd-d247294ba85b/ovsdbserver-sb/0.log" Dec 04 16:47:08 crc kubenswrapper[4878]: I1204 16:47:08.293863 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c5668dccb-gv79r_460bb923-1a77-4759-98cb-b6262047cc27/placement-api/0.log" Dec 04 16:47:08 crc kubenswrapper[4878]: I1204 16:47:08.379836 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-c5668dccb-gv79r_460bb923-1a77-4759-98cb-b6262047cc27/placement-log/0.log" Dec 04 16:47:08 crc kubenswrapper[4878]: I1204 16:47:08.384219 4878 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9686eee-f63a-40e8-a8a6-fe5901d0888c/setup-container/0.log" Dec 04 16:47:08 crc kubenswrapper[4878]: I1204 16:47:08.664912 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_976d4c5a-fb7f-4f01-8d0d-527a87639c33/setup-container/0.log" Dec 04 16:47:08 crc kubenswrapper[4878]: I1204 16:47:08.710464 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9686eee-f63a-40e8-a8a6-fe5901d0888c/rabbitmq/0.log" Dec 04 16:47:08 crc kubenswrapper[4878]: I1204 16:47:08.840789 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9686eee-f63a-40e8-a8a6-fe5901d0888c/setup-container/0.log" Dec 04 16:47:09 crc kubenswrapper[4878]: I1204 16:47:09.115740 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7ppb4_633ccb62-7bfe-48dc-bd16-1a042f8d57f6/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:09 crc kubenswrapper[4878]: I1204 16:47:09.121979 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_976d4c5a-fb7f-4f01-8d0d-527a87639c33/rabbitmq/0.log" Dec 04 16:47:09 crc kubenswrapper[4878]: I1204 16:47:09.132714 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_976d4c5a-fb7f-4f01-8d0d-527a87639c33/setup-container/0.log" Dec 04 16:47:09 crc kubenswrapper[4878]: I1204 16:47:09.371075 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2wlpr_fa06f0eb-73fb-4882-9dbd-bb7c4dfb11fb/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:09 crc kubenswrapper[4878]: I1204 16:47:09.480412 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-djkf7_5fec4d01-1d56-4db6-ac76-cb8e2b62a659/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:09 crc kubenswrapper[4878]: I1204 16:47:09.702262 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-b6xnw_b35793af-eea9-4355-8bd0-8a7aec7b412a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:09 crc kubenswrapper[4878]: I1204 16:47:09.765289 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-jnbpb_4831aa21-6bfc-415f-b6e1-53a350cf923b/ssh-known-hosts-edpm-deployment/0.log" Dec 04 16:47:10 crc kubenswrapper[4878]: I1204 16:47:10.037905 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d68fcf6bc-v5rvx_07b8e6cd-af0c-4c2d-97fb-bee728d728a8/proxy-server/0.log" Dec 04 16:47:10 crc kubenswrapper[4878]: I1204 16:47:10.094178 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d68fcf6bc-v5rvx_07b8e6cd-af0c-4c2d-97fb-bee728d728a8/proxy-httpd/0.log" Dec 04 16:47:10 crc kubenswrapper[4878]: I1204 16:47:10.241323 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-55hsg_947bbc4c-f673-433d-bc78-4411fea88516/swift-ring-rebalance/0.log" Dec 04 16:47:10 crc kubenswrapper[4878]: I1204 16:47:10.303690 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/account-reaper/0.log" Dec 04 16:47:10 crc kubenswrapper[4878]: I1204 16:47:10.388490 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/account-auditor/0.log" Dec 04 16:47:10 crc kubenswrapper[4878]: I1204 16:47:10.524251 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/account-replicator/0.log" Dec 04 16:47:10 crc kubenswrapper[4878]: I1204 16:47:10.547783 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/account-server/0.log" Dec 04 16:47:10 crc kubenswrapper[4878]: I1204 16:47:10.678419 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/container-replicator/0.log" Dec 04 16:47:10 crc kubenswrapper[4878]: I1204 16:47:10.872113 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/container-auditor/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.015307 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/container-server/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.094716 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-auditor/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.132996 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/container-updater/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.171154 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-expirer/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.334675 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-replicator/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.349109 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-server/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.360306 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/object-updater/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.406985 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/rsync/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.625155 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_10b4321d-097d-4ab2-8014-63c5b80e6839/swift-recon-cron/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.691136 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-sfrkk_5c550a94-c515-45cc-9c92-d7b9043486ef/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.900601 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_3e394916-5de1-45b1-9e49-246be63a5689/tempest-tests-tempest-tests-runner/0.log" Dec 04 16:47:11 crc kubenswrapper[4878]: I1204 16:47:11.949025 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_186b727a-be9a-401e-9ec6-fc48097d479a/test-operator-logs-container/0.log" Dec 04 16:47:12 crc kubenswrapper[4878]: I1204 16:47:12.146332 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-d4sw2_e6b0a783-a808-4e9d-a207-6a4c56b36cd9/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 04 16:47:22 crc kubenswrapper[4878]: I1204 16:47:22.411700 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_065cfa24-566e-4cb0-8827-acbc50620fee/memcached/0.log" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.359757 4878 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6zjtp"] Dec 04 16:47:27 crc kubenswrapper[4878]: E1204 16:47:27.360575 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d4eec0-8e70-46c0-82be-214a966576d1" containerName="container-00" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.360591 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d4eec0-8e70-46c0-82be-214a966576d1" containerName="container-00" Dec 04 16:47:27 crc kubenswrapper[4878]: E1204 16:47:27.360624 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerName="registry-server" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.360632 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerName="registry-server" Dec 04 16:47:27 crc kubenswrapper[4878]: E1204 16:47:27.360647 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerName="extract-utilities" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.360653 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerName="extract-utilities" Dec 04 16:47:27 crc kubenswrapper[4878]: E1204 16:47:27.360665 4878 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerName="extract-content" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.360670 4878 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerName="extract-content" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.361921 4878 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4ffe9952-4635-4fcc-a8cf-4db8063a175b" containerName="registry-server" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.361956 4878 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d4eec0-8e70-46c0-82be-214a966576d1" containerName="container-00" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.363781 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.374422 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zjtp"] Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.488779 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-catalog-content\") pod \"community-operators-6zjtp\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.488951 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-utilities\") pod \"community-operators-6zjtp\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.489066 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhx5z\" (UniqueName: \"kubernetes.io/projected/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-kube-api-access-rhx5z\") pod \"community-operators-6zjtp\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.556699 4878 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-svvzl"] Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.559402 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.581833 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svvzl"] Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.591511 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-catalog-content\") pod \"community-operators-6zjtp\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.591681 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-utilities\") pod \"community-operators-6zjtp\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.591809 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhx5z\" (UniqueName: \"kubernetes.io/projected/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-kube-api-access-rhx5z\") pod \"community-operators-6zjtp\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.592783 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-catalog-content\") pod \"community-operators-6zjtp\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " 
pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.593088 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-utilities\") pod \"community-operators-6zjtp\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.618317 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhx5z\" (UniqueName: \"kubernetes.io/projected/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-kube-api-access-rhx5z\") pod \"community-operators-6zjtp\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.693973 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-catalog-content\") pod \"redhat-marketplace-svvzl\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.694389 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rv5n\" (UniqueName: \"kubernetes.io/projected/b5fc9922-8a69-4c51-b827-626f1e6289ee-kube-api-access-9rv5n\") pod \"redhat-marketplace-svvzl\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.694587 4878 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-utilities\") pod \"redhat-marketplace-svvzl\" (UID: 
\"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.695546 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.797533 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rv5n\" (UniqueName: \"kubernetes.io/projected/b5fc9922-8a69-4c51-b827-626f1e6289ee-kube-api-access-9rv5n\") pod \"redhat-marketplace-svvzl\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.797637 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-utilities\") pod \"redhat-marketplace-svvzl\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.797771 4878 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-catalog-content\") pod \"redhat-marketplace-svvzl\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.798277 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-utilities\") pod \"redhat-marketplace-svvzl\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.798381 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-catalog-content\") pod \"redhat-marketplace-svvzl\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.816967 4878 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rv5n\" (UniqueName: \"kubernetes.io/projected/b5fc9922-8a69-4c51-b827-626f1e6289ee-kube-api-access-9rv5n\") pod \"redhat-marketplace-svvzl\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:27 crc kubenswrapper[4878]: I1204 16:47:27.885074 4878 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:28 crc kubenswrapper[4878]: I1204 16:47:28.350823 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zjtp"] Dec 04 16:47:28 crc kubenswrapper[4878]: I1204 16:47:28.512088 4878 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-svvzl"] Dec 04 16:47:28 crc kubenswrapper[4878]: W1204 16:47:28.520291 4878 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5fc9922_8a69_4c51_b827_626f1e6289ee.slice/crio-ed51287ded17bc1fa5322bbf0930653bf4fa309ab5fba1ea3ee3092603956fa1 WatchSource:0}: Error finding container ed51287ded17bc1fa5322bbf0930653bf4fa309ab5fba1ea3ee3092603956fa1: Status 404 returned error can't find the container with id ed51287ded17bc1fa5322bbf0930653bf4fa309ab5fba1ea3ee3092603956fa1 Dec 04 16:47:29 crc kubenswrapper[4878]: I1204 16:47:29.032277 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b" containerID="db7ea6c2ff959c78547b9446c948a6217b288169e7c83bd0f51d3b7d4919062b" exitCode=0 Dec 04 16:47:29 
crc kubenswrapper[4878]: I1204 16:47:29.032375 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjtp" event={"ID":"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b","Type":"ContainerDied","Data":"db7ea6c2ff959c78547b9446c948a6217b288169e7c83bd0f51d3b7d4919062b"} Dec 04 16:47:29 crc kubenswrapper[4878]: I1204 16:47:29.032657 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjtp" event={"ID":"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b","Type":"ContainerStarted","Data":"a4e99537ea7890f69d82e2bbc3cbc70a62d7bd389460f210f5a6ef94f235bde4"} Dec 04 16:47:29 crc kubenswrapper[4878]: I1204 16:47:29.035023 4878 generic.go:334] "Generic (PLEG): container finished" podID="b5fc9922-8a69-4c51-b827-626f1e6289ee" containerID="f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea" exitCode=0 Dec 04 16:47:29 crc kubenswrapper[4878]: I1204 16:47:29.035074 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svvzl" event={"ID":"b5fc9922-8a69-4c51-b827-626f1e6289ee","Type":"ContainerDied","Data":"f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea"} Dec 04 16:47:29 crc kubenswrapper[4878]: I1204 16:47:29.035107 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svvzl" event={"ID":"b5fc9922-8a69-4c51-b827-626f1e6289ee","Type":"ContainerStarted","Data":"ed51287ded17bc1fa5322bbf0930653bf4fa309ab5fba1ea3ee3092603956fa1"} Dec 04 16:47:30 crc kubenswrapper[4878]: I1204 16:47:30.841472 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:47:30 crc kubenswrapper[4878]: I1204 16:47:30.842082 4878 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:47:31 crc kubenswrapper[4878]: I1204 16:47:31.054230 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b" containerID="762bea8da99f94da28e521b105decb4ce89ecc56fd6b9582e7b6a2e4846643f3" exitCode=0 Dec 04 16:47:31 crc kubenswrapper[4878]: I1204 16:47:31.054418 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjtp" event={"ID":"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b","Type":"ContainerDied","Data":"762bea8da99f94da28e521b105decb4ce89ecc56fd6b9582e7b6a2e4846643f3"} Dec 04 16:47:31 crc kubenswrapper[4878]: I1204 16:47:31.057763 4878 generic.go:334] "Generic (PLEG): container finished" podID="b5fc9922-8a69-4c51-b827-626f1e6289ee" containerID="483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed" exitCode=0 Dec 04 16:47:31 crc kubenswrapper[4878]: I1204 16:47:31.057821 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svvzl" event={"ID":"b5fc9922-8a69-4c51-b827-626f1e6289ee","Type":"ContainerDied","Data":"483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed"} Dec 04 16:47:32 crc kubenswrapper[4878]: I1204 16:47:32.072980 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svvzl" event={"ID":"b5fc9922-8a69-4c51-b827-626f1e6289ee","Type":"ContainerStarted","Data":"3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27"} Dec 04 16:47:32 crc kubenswrapper[4878]: I1204 16:47:32.076111 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjtp" 
event={"ID":"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b","Type":"ContainerStarted","Data":"e1d96e8448b7eb03bab2d8d9d01a4f89150bc5d5c8b295bbdf7258835735f880"} Dec 04 16:47:32 crc kubenswrapper[4878]: I1204 16:47:32.097355 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-svvzl" podStartSLOduration=2.591271688 podStartE2EDuration="5.097330183s" podCreationTimestamp="2025-12-04 16:47:27 +0000 UTC" firstStartedPulling="2025-12-04 16:47:29.037824072 +0000 UTC m=+4293.000361028" lastFinishedPulling="2025-12-04 16:47:31.543882567 +0000 UTC m=+4295.506419523" observedRunningTime="2025-12-04 16:47:32.090716129 +0000 UTC m=+4296.053253085" watchObservedRunningTime="2025-12-04 16:47:32.097330183 +0000 UTC m=+4296.059867139" Dec 04 16:47:32 crc kubenswrapper[4878]: I1204 16:47:32.115299 4878 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6zjtp" podStartSLOduration=2.589814273 podStartE2EDuration="5.11527962s" podCreationTimestamp="2025-12-04 16:47:27 +0000 UTC" firstStartedPulling="2025-12-04 16:47:29.034030548 +0000 UTC m=+4292.996567504" lastFinishedPulling="2025-12-04 16:47:31.559495895 +0000 UTC m=+4295.522032851" observedRunningTime="2025-12-04 16:47:32.110734977 +0000 UTC m=+4296.073271933" watchObservedRunningTime="2025-12-04 16:47:32.11527962 +0000 UTC m=+4296.077816576" Dec 04 16:47:37 crc kubenswrapper[4878]: I1204 16:47:37.696301 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:37 crc kubenswrapper[4878]: I1204 16:47:37.696974 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:37 crc kubenswrapper[4878]: I1204 16:47:37.805756 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6zjtp" Dec 04 
16:47:37 crc kubenswrapper[4878]: I1204 16:47:37.886377 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:37 crc kubenswrapper[4878]: I1204 16:47:37.886437 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:37 crc kubenswrapper[4878]: I1204 16:47:37.933636 4878 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:38 crc kubenswrapper[4878]: I1204 16:47:38.186408 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:38 crc kubenswrapper[4878]: I1204 16:47:38.288226 4878 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:39 crc kubenswrapper[4878]: I1204 16:47:39.645340 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svvzl"] Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.149440 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-svvzl" podUID="b5fc9922-8a69-4c51-b827-626f1e6289ee" containerName="registry-server" containerID="cri-o://3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27" gracePeriod=2 Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.616777 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.645956 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zjtp"] Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.646256 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6zjtp" podUID="5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b" containerName="registry-server" containerID="cri-o://e1d96e8448b7eb03bab2d8d9d01a4f89150bc5d5c8b295bbdf7258835735f880" gracePeriod=2 Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.700364 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-utilities\") pod \"b5fc9922-8a69-4c51-b827-626f1e6289ee\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.700464 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rv5n\" (UniqueName: \"kubernetes.io/projected/b5fc9922-8a69-4c51-b827-626f1e6289ee-kube-api-access-9rv5n\") pod \"b5fc9922-8a69-4c51-b827-626f1e6289ee\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.700537 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-catalog-content\") pod \"b5fc9922-8a69-4c51-b827-626f1e6289ee\" (UID: \"b5fc9922-8a69-4c51-b827-626f1e6289ee\") " Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.701501 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-utilities" (OuterVolumeSpecName: "utilities") pod "b5fc9922-8a69-4c51-b827-626f1e6289ee" (UID: 
"b5fc9922-8a69-4c51-b827-626f1e6289ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.706364 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5fc9922-8a69-4c51-b827-626f1e6289ee-kube-api-access-9rv5n" (OuterVolumeSpecName: "kube-api-access-9rv5n") pod "b5fc9922-8a69-4c51-b827-626f1e6289ee" (UID: "b5fc9922-8a69-4c51-b827-626f1e6289ee"). InnerVolumeSpecName "kube-api-access-9rv5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.723711 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5fc9922-8a69-4c51-b827-626f1e6289ee" (UID: "b5fc9922-8a69-4c51-b827-626f1e6289ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.803809 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.803853 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rv5n\" (UniqueName: \"kubernetes.io/projected/b5fc9922-8a69-4c51-b827-626f1e6289ee-kube-api-access-9rv5n\") on node \"crc\" DevicePath \"\"" Dec 04 16:47:40 crc kubenswrapper[4878]: I1204 16:47:40.803868 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9922-8a69-4c51-b827-626f1e6289ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.165295 4878 generic.go:334] "Generic (PLEG): container finished" 
podID="b5fc9922-8a69-4c51-b827-626f1e6289ee" containerID="3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27" exitCode=0 Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.165431 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svvzl" event={"ID":"b5fc9922-8a69-4c51-b827-626f1e6289ee","Type":"ContainerDied","Data":"3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27"} Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.165681 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-svvzl" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.165714 4878 scope.go:117] "RemoveContainer" containerID="3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.165699 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-svvzl" event={"ID":"b5fc9922-8a69-4c51-b827-626f1e6289ee","Type":"ContainerDied","Data":"ed51287ded17bc1fa5322bbf0930653bf4fa309ab5fba1ea3ee3092603956fa1"} Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.170026 4878 generic.go:334] "Generic (PLEG): container finished" podID="5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b" containerID="e1d96e8448b7eb03bab2d8d9d01a4f89150bc5d5c8b295bbdf7258835735f880" exitCode=0 Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.170061 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjtp" event={"ID":"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b","Type":"ContainerDied","Data":"e1d96e8448b7eb03bab2d8d9d01a4f89150bc5d5c8b295bbdf7258835735f880"} Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.225024 4878 scope.go:117] "RemoveContainer" containerID="483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.238282 4878 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-svvzl"] Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.318088 4878 scope.go:117] "RemoveContainer" containerID="f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.326240 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-svvzl"] Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.406081 4878 scope.go:117] "RemoveContainer" containerID="3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27" Dec 04 16:47:41 crc kubenswrapper[4878]: E1204 16:47:41.415023 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27\": container with ID starting with 3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27 not found: ID does not exist" containerID="3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.415085 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27"} err="failed to get container status \"3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27\": rpc error: code = NotFound desc = could not find container \"3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27\": container with ID starting with 3dce70c1cacbfcbac8118e785d75dddf5c16c638241af635d5e9d818947cdf27 not found: ID does not exist" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.415113 4878 scope.go:117] "RemoveContainer" containerID="483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed" Dec 04 16:47:41 crc kubenswrapper[4878]: E1204 16:47:41.425024 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed\": container with ID starting with 483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed not found: ID does not exist" containerID="483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.425074 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed"} err="failed to get container status \"483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed\": rpc error: code = NotFound desc = could not find container \"483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed\": container with ID starting with 483b683aaf8e6199ca6b6eef75e87c7b9ce0ce458788d5d95577508427c889ed not found: ID does not exist" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.425102 4878 scope.go:117] "RemoveContainer" containerID="f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea" Dec 04 16:47:41 crc kubenswrapper[4878]: E1204 16:47:41.426393 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea\": container with ID starting with f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea not found: ID does not exist" containerID="f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.426440 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea"} err="failed to get container status \"f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea\": rpc error: code = NotFound desc = could not find container 
\"f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea\": container with ID starting with f224ee042550458dc63ce71965e70843a37aa4c8534b21cf9b6c1f6d5c7c79ea not found: ID does not exist" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.678198 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.725382 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-catalog-content\") pod \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.725618 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhx5z\" (UniqueName: \"kubernetes.io/projected/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-kube-api-access-rhx5z\") pod \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.725696 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-utilities\") pod \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\" (UID: \"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b\") " Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.726943 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-utilities" (OuterVolumeSpecName: "utilities") pod "5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b" (UID: "5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.745294 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-kube-api-access-rhx5z" (OuterVolumeSpecName: "kube-api-access-rhx5z") pod "5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b" (UID: "5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b"). InnerVolumeSpecName "kube-api-access-rhx5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.791714 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b" (UID: "5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.827441 4878 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-utilities\") on node \"crc\" DevicePath \"\"" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.827701 4878 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 04 16:47:41 crc kubenswrapper[4878]: I1204 16:47:41.827809 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhx5z\" (UniqueName: \"kubernetes.io/projected/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b-kube-api-access-rhx5z\") on node \"crc\" DevicePath \"\"" Dec 04 16:47:42 crc kubenswrapper[4878]: I1204 16:47:42.182908 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zjtp" 
event={"ID":"5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b","Type":"ContainerDied","Data":"a4e99537ea7890f69d82e2bbc3cbc70a62d7bd389460f210f5a6ef94f235bde4"} Dec 04 16:47:42 crc kubenswrapper[4878]: I1204 16:47:42.183281 4878 scope.go:117] "RemoveContainer" containerID="e1d96e8448b7eb03bab2d8d9d01a4f89150bc5d5c8b295bbdf7258835735f880" Dec 04 16:47:42 crc kubenswrapper[4878]: I1204 16:47:42.182960 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zjtp" Dec 04 16:47:42 crc kubenswrapper[4878]: I1204 16:47:42.205713 4878 scope.go:117] "RemoveContainer" containerID="762bea8da99f94da28e521b105decb4ce89ecc56fd6b9582e7b6a2e4846643f3" Dec 04 16:47:42 crc kubenswrapper[4878]: I1204 16:47:42.219194 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zjtp"] Dec 04 16:47:42 crc kubenswrapper[4878]: I1204 16:47:42.227725 4878 scope.go:117] "RemoveContainer" containerID="db7ea6c2ff959c78547b9446c948a6217b288169e7c83bd0f51d3b7d4919062b" Dec 04 16:47:42 crc kubenswrapper[4878]: I1204 16:47:42.230338 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6zjtp"] Dec 04 16:47:43 crc kubenswrapper[4878]: I1204 16:47:43.192252 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b" path="/var/lib/kubelet/pods/5b2d6039-09a8-4e7b-bb43-fe0b4ee15c4b/volumes" Dec 04 16:47:43 crc kubenswrapper[4878]: I1204 16:47:43.193262 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5fc9922-8a69-4c51-b827-626f1e6289ee" path="/var/lib/kubelet/pods/b5fc9922-8a69-4c51-b827-626f1e6289ee/volumes" Dec 04 16:47:43 crc kubenswrapper[4878]: I1204 16:47:43.887148 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/util/0.log" Dec 04 
16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.036805 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/util/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.099965 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/pull/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.106488 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/pull/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.271042 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/extract/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.295792 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/util/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.323220 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_208b47576ff8bb60fc5413794e54692e2c062b682398f5624d6b2f4530lnns7_9786a318-91d9-49a6-9123-fa844e894ecc/pull/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.465482 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-tbkt6_8553cda1-13f9-4f6f-b301-0f757fbf0021/kube-rbac-proxy/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.504615 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-tbkt6_8553cda1-13f9-4f6f-b301-0f757fbf0021/manager/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.570453 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-d6d6c_a4b2d922-f684-4b6f-93dc-f717d2ece304/kube-rbac-proxy/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.664541 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-d6d6c_a4b2d922-f684-4b6f-93dc-f717d2ece304/manager/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.701280 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-tm72f_69b41a1e-5d38-4364-97bf-af19372d6324/kube-rbac-proxy/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.818360 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-tm72f_69b41a1e-5d38-4364-97bf-af19372d6324/manager/0.log" Dec 04 16:47:44 crc kubenswrapper[4878]: I1204 16:47:44.913806 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-4bdd7_504d742f-8fe2-4006-b94e-bea669f69743/kube-rbac-proxy/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.057012 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-4bdd7_504d742f-8fe2-4006-b94e-bea669f69743/manager/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.140890 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-ghx29_be55b657-228b-4eef-8047-1d4c2577c529/manager/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.194539 4878 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-ghx29_be55b657-228b-4eef-8047-1d4c2577c529/kube-rbac-proxy/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.350209 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-spbj8_8b665720-1363-4671-8211-b91712e627df/kube-rbac-proxy/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.466889 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-spbj8_8b665720-1363-4671-8211-b91712e627df/manager/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.577424 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-97pcj_80bb52cf-c5dd-40ef-b4bf-657d731ad9bc/kube-rbac-proxy/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.709152 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-bh7x6_1c586b36-c4f0-4de4-8616-ed14769e76a1/kube-rbac-proxy/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.736414 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-97pcj_80bb52cf-c5dd-40ef-b4bf-657d731ad9bc/manager/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.845938 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-bh7x6_1c586b36-c4f0-4de4-8616-ed14769e76a1/manager/0.log" Dec 04 16:47:45 crc kubenswrapper[4878]: I1204 16:47:45.978113 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-974fj_1880d469-6774-4848-9df9-31bfd93bc699/kube-rbac-proxy/0.log" Dec 04 16:47:46 crc 
kubenswrapper[4878]: I1204 16:47:46.052291 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-974fj_1880d469-6774-4848-9df9-31bfd93bc699/manager/0.log" Dec 04 16:47:46 crc kubenswrapper[4878]: I1204 16:47:46.434523 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-8wvf6_95fa2571-c576-4132-b55a-cb1211301ce8/kube-rbac-proxy/0.log" Dec 04 16:47:46 crc kubenswrapper[4878]: I1204 16:47:46.452702 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-8wvf6_95fa2571-c576-4132-b55a-cb1211301ce8/manager/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.132805 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-nbqqp_f925d486-d890-44dc-a416-d976e8b7d188/kube-rbac-proxy/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.175072 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-cq22h_fb61b1d4-aeeb-4526-8515-4d647d61aa9e/kube-rbac-proxy/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.251201 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-nbqqp_f925d486-d890-44dc-a416-d976e8b7d188/manager/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.395761 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-cq22h_fb61b1d4-aeeb-4526-8515-4d647d61aa9e/manager/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.424710 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wrss6_e3e80c29-b107-4969-93d7-e305e1c7eaa2/kube-rbac-proxy/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.558059 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-wrss6_e3e80c29-b107-4969-93d7-e305e1c7eaa2/manager/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.687226 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-hms78_cd7d361b-7311-4d32-aaae-21ba66a40d69/manager/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.714218 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-hms78_cd7d361b-7311-4d32-aaae-21ba66a40d69/kube-rbac-proxy/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.854185 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz_82d2275a-c4c7-42a6-9027-cbbf12d0381f/kube-rbac-proxy/0.log" Dec 04 16:47:47 crc kubenswrapper[4878]: I1204 16:47:47.885449 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4qcctz_82d2275a-c4c7-42a6-9027-cbbf12d0381f/manager/0.log" Dec 04 16:47:48 crc kubenswrapper[4878]: I1204 16:47:48.205365 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6fzqf_d52859c7-fd58-4cf6-af6f-a387abd1ea3a/registry-server/0.log" Dec 04 16:47:48 crc kubenswrapper[4878]: I1204 16:47:48.272923 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5557b664dc-pw4vq_08b81a71-e15e-4321-932c-37c52be4cf74/operator/0.log" Dec 04 16:47:48 crc kubenswrapper[4878]: I1204 16:47:48.405341 
4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lg8ds_4b7ee068-250c-4674-8ec2-60dd5c0419be/kube-rbac-proxy/0.log" Dec 04 16:47:48 crc kubenswrapper[4878]: I1204 16:47:48.618402 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-lg8ds_4b7ee068-250c-4674-8ec2-60dd5c0419be/manager/0.log" Dec 04 16:47:48 crc kubenswrapper[4878]: I1204 16:47:48.666216 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-7cmxk_828a8694-88d9-4658-909b-15188336b78b/manager/0.log" Dec 04 16:47:48 crc kubenswrapper[4878]: I1204 16:47:48.709726 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-7cmxk_828a8694-88d9-4658-909b-15188336b78b/kube-rbac-proxy/0.log" Dec 04 16:47:49 crc kubenswrapper[4878]: I1204 16:47:49.229178 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5f86dd88bc-blw62_c863f265-71e4-4bb2-b872-42d21f42fb5c/manager/0.log" Dec 04 16:47:49 crc kubenswrapper[4878]: I1204 16:47:49.352999 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-czgfc_f4bb7917-09ae-4b2a-95c1-172ff14e5771/operator/0.log" Dec 04 16:47:49 crc kubenswrapper[4878]: I1204 16:47:49.384533 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-n8dqh_9e49df96-9a55-4c5c-864f-cd1aada7db7a/kube-rbac-proxy/0.log" Dec 04 16:47:49 crc kubenswrapper[4878]: I1204 16:47:49.491208 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-n8dqh_9e49df96-9a55-4c5c-864f-cd1aada7db7a/manager/0.log" Dec 04 16:47:49 crc 
kubenswrapper[4878]: I1204 16:47:49.604763 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-2vjxr_b89e44e5-1b68-4902-9a89-0b14489e1dfb/kube-rbac-proxy/0.log" Dec 04 16:47:49 crc kubenswrapper[4878]: I1204 16:47:49.740439 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-2vjxr_b89e44e5-1b68-4902-9a89-0b14489e1dfb/manager/0.log" Dec 04 16:47:49 crc kubenswrapper[4878]: I1204 16:47:49.766337 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-lmpm5_a12e358f-da5d-409b-b9d5-a91897588e65/kube-rbac-proxy/0.log" Dec 04 16:47:49 crc kubenswrapper[4878]: I1204 16:47:49.814597 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-lmpm5_a12e358f-da5d-409b-b9d5-a91897588e65/manager/0.log" Dec 04 16:47:49 crc kubenswrapper[4878]: I1204 16:47:49.899390 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-cnxd4_d37a4080-1835-47f1-bad0-040bcb647c80/kube-rbac-proxy/0.log" Dec 04 16:47:49 crc kubenswrapper[4878]: I1204 16:47:49.929737 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-cnxd4_d37a4080-1835-47f1-bad0-040bcb647c80/manager/0.log" Dec 04 16:48:00 crc kubenswrapper[4878]: I1204 16:48:00.841025 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:48:00 crc kubenswrapper[4878]: I1204 16:48:00.841586 4878 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:48:11 crc kubenswrapper[4878]: I1204 16:48:11.675238 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sx899_dbfa5fb1-8fb8-41ef-805d-1034cf88853a/control-plane-machine-set-operator/0.log" Dec 04 16:48:11 crc kubenswrapper[4878]: I1204 16:48:11.755593 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7mgms_1fa17e12-0683-4fba-810b-fa1c10a2738f/kube-rbac-proxy/0.log" Dec 04 16:48:11 crc kubenswrapper[4878]: I1204 16:48:11.850458 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-7mgms_1fa17e12-0683-4fba-810b-fa1c10a2738f/machine-api-operator/0.log" Dec 04 16:48:25 crc kubenswrapper[4878]: I1204 16:48:25.238229 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-jq2p4_64e1efea-99bb-4630-82dc-b90418609577/cert-manager-controller/0.log" Dec 04 16:48:25 crc kubenswrapper[4878]: I1204 16:48:25.505486 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-6w6cz_f338f7a0-f59d-4f56-8f51-e9aade039feb/cert-manager-webhook/0.log" Dec 04 16:48:25 crc kubenswrapper[4878]: I1204 16:48:25.516322 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-zjf7m_e4d3c25f-014d-4d4e-aff9-291289e798f8/cert-manager-cainjector/0.log" Dec 04 16:48:30 crc kubenswrapper[4878]: I1204 16:48:30.841087 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:48:30 crc kubenswrapper[4878]: I1204 16:48:30.841699 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:48:30 crc kubenswrapper[4878]: I1204 16:48:30.841768 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" Dec 04 16:48:30 crc kubenswrapper[4878]: I1204 16:48:30.842804 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1ea8cd03d7d65256f50f5110dcbea35f33456101867e7aac824b5368df48546a"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 04 16:48:30 crc kubenswrapper[4878]: I1204 16:48:30.842895 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://1ea8cd03d7d65256f50f5110dcbea35f33456101867e7aac824b5368df48546a" gracePeriod=600 Dec 04 16:48:31 crc kubenswrapper[4878]: I1204 16:48:31.685344 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="1ea8cd03d7d65256f50f5110dcbea35f33456101867e7aac824b5368df48546a" exitCode=0 Dec 04 16:48:31 crc kubenswrapper[4878]: I1204 16:48:31.685410 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"1ea8cd03d7d65256f50f5110dcbea35f33456101867e7aac824b5368df48546a"} Dec 04 16:48:31 crc kubenswrapper[4878]: I1204 16:48:31.685689 4878 scope.go:117] "RemoveContainer" containerID="ecc45cbb3274c96cd517437879c47f558e47b09856538a505f6251b3be7bbf5e" Dec 04 16:48:32 crc kubenswrapper[4878]: I1204 16:48:32.696522 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerStarted","Data":"73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"} Dec 04 16:48:38 crc kubenswrapper[4878]: I1204 16:48:38.840856 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-k7kd2_5a7cbed0-a13b-4a24-bf76-b4d9bdbf1b0d/nmstate-console-plugin/0.log" Dec 04 16:48:39 crc kubenswrapper[4878]: I1204 16:48:39.073344 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-mcflh_c72c2914-f98d-4c5b-b885-16f7bcf2f793/nmstate-handler/0.log" Dec 04 16:48:39 crc kubenswrapper[4878]: I1204 16:48:39.166256 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-m2m8b_f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b/nmstate-metrics/0.log" Dec 04 16:48:39 crc kubenswrapper[4878]: I1204 16:48:39.178535 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-m2m8b_f52e2477-5fd0-4f4a-9ba8-6a8bcb44e98b/kube-rbac-proxy/0.log" Dec 04 16:48:39 crc kubenswrapper[4878]: I1204 16:48:39.892237 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-n7ffn_e20ad695-cd00-478d-9e02-662d9bceb1a5/nmstate-operator/0.log" Dec 04 16:48:39 crc kubenswrapper[4878]: I1204 16:48:39.946135 4878 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-fxdj9_836684ff-7d25-4bdb-82ba-130f9a37da2b/nmstate-webhook/0.log" Dec 04 16:48:55 crc kubenswrapper[4878]: I1204 16:48:55.700398 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jrsl2_b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f/kube-rbac-proxy/0.log" Dec 04 16:48:55 crc kubenswrapper[4878]: I1204 16:48:55.889643 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jrsl2_b2b693c8-a1e5-4c7c-b7a5-4b4cce0cfd0f/controller/0.log" Dec 04 16:48:55 crc kubenswrapper[4878]: I1204 16:48:55.993647 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-frr-files/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.169118 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-reloader/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.210518 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-frr-files/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.210862 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-reloader/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.225281 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-metrics/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.354515 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-frr-files/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.406249 4878 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-metrics/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.406712 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-reloader/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.420823 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-metrics/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.624372 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-frr-files/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.653749 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-metrics/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.668446 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/cp-reloader/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.682724 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/controller/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.802934 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/frr-metrics/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.853142 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/kube-rbac-proxy/0.log" Dec 04 16:48:56 crc kubenswrapper[4878]: I1204 16:48:56.911592 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/kube-rbac-proxy-frr/0.log" Dec 04 16:48:57 crc kubenswrapper[4878]: I1204 16:48:57.042860 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/reloader/0.log" Dec 04 16:48:57 crc kubenswrapper[4878]: I1204 16:48:57.188055 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-xh265_f12abc8b-282b-472f-9bc9-b00c63c1d45c/frr-k8s-webhook-server/0.log" Dec 04 16:48:57 crc kubenswrapper[4878]: I1204 16:48:57.394017 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5fb6fc4594-rpm6b_2236b740-707c-4652-994a-3b5289a54cf1/manager/0.log" Dec 04 16:48:57 crc kubenswrapper[4878]: I1204 16:48:57.531644 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-759dcc4c7f-mgcfw_db6409af-e753-47ac-8370-71aedbe7208d/webhook-server/0.log" Dec 04 16:48:57 crc kubenswrapper[4878]: I1204 16:48:57.727384 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwrcm_293b50a4-7270-4560-bb54-ad9394acbf8d/kube-rbac-proxy/0.log" Dec 04 16:48:58 crc kubenswrapper[4878]: I1204 16:48:58.265293 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-cwrcm_293b50a4-7270-4560-bb54-ad9394acbf8d/speaker/0.log" Dec 04 16:48:58 crc kubenswrapper[4878]: I1204 16:48:58.423004 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tt4z_8338b9f2-c79a-4232-b705-b3a21426ade6/frr/0.log" Dec 04 16:49:11 crc kubenswrapper[4878]: I1204 16:49:11.365533 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/util/0.log" Dec 04 16:49:11 crc kubenswrapper[4878]: 
I1204 16:49:11.592471 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/util/0.log" Dec 04 16:49:11 crc kubenswrapper[4878]: I1204 16:49:11.619724 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/pull/0.log" Dec 04 16:49:11 crc kubenswrapper[4878]: I1204 16:49:11.665948 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/pull/0.log" Dec 04 16:49:11 crc kubenswrapper[4878]: I1204 16:49:11.773788 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/pull/0.log" Dec 04 16:49:11 crc kubenswrapper[4878]: I1204 16:49:11.783459 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/util/0.log" Dec 04 16:49:11 crc kubenswrapper[4878]: I1204 16:49:11.815179 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212f9rl26_3f27b5b8-26d9-405a-9c12-4ce85d5fcec7/extract/0.log" Dec 04 16:49:12 crc kubenswrapper[4878]: I1204 16:49:12.464507 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/util/0.log" Dec 04 16:49:12 crc kubenswrapper[4878]: I1204 16:49:12.586008 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/util/0.log" Dec 04 16:49:12 crc kubenswrapper[4878]: I1204 16:49:12.606563 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/pull/0.log" Dec 04 16:49:12 crc kubenswrapper[4878]: I1204 16:49:12.638352 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/pull/0.log" Dec 04 16:49:12 crc kubenswrapper[4878]: I1204 16:49:12.799731 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/util/0.log" Dec 04 16:49:12 crc kubenswrapper[4878]: I1204 16:49:12.821569 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/pull/0.log" Dec 04 16:49:12 crc kubenswrapper[4878]: I1204 16:49:12.850576 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f837jrf7_1ec855d4-1744-41cf-b49d-0a75a9a3cd2a/extract/0.log" Dec 04 16:49:12 crc kubenswrapper[4878]: I1204 16:49:12.988835 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2dmnc_53e6ef95-e627-4431-a469-2c58443aaf6e/extract-utilities/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 16:49:13.190332 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2dmnc_53e6ef95-e627-4431-a469-2c58443aaf6e/extract-utilities/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 
16:49:13.190414 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2dmnc_53e6ef95-e627-4431-a469-2c58443aaf6e/extract-content/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 16:49:13.234562 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2dmnc_53e6ef95-e627-4431-a469-2c58443aaf6e/extract-content/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 16:49:13.428804 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2dmnc_53e6ef95-e627-4431-a469-2c58443aaf6e/extract-utilities/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 16:49:13.458963 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2dmnc_53e6ef95-e627-4431-a469-2c58443aaf6e/extract-content/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 16:49:13.616863 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2dmnc_53e6ef95-e627-4431-a469-2c58443aaf6e/registry-server/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 16:49:13.706442 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-utilities/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 16:49:13.868429 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-content/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 16:49:13.878377 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-utilities/0.log" Dec 04 16:49:13 crc kubenswrapper[4878]: I1204 16:49:13.888278 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-content/0.log" Dec 04 16:49:14 crc kubenswrapper[4878]: I1204 16:49:14.065656 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-utilities/0.log" Dec 04 16:49:14 crc kubenswrapper[4878]: I1204 16:49:14.127336 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/extract-content/0.log" Dec 04 16:49:14 crc kubenswrapper[4878]: I1204 16:49:14.345264 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-b9rwj_ea958ba8-bb58-498e-8c25-a5b8f413f3be/marketplace-operator/0.log" Dec 04 16:49:14 crc kubenswrapper[4878]: I1204 16:49:14.361628 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-utilities/0.log" Dec 04 16:49:14 crc kubenswrapper[4878]: I1204 16:49:14.625376 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-content/0.log" Dec 04 16:49:14 crc kubenswrapper[4878]: I1204 16:49:14.628020 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-utilities/0.log" Dec 04 16:49:14 crc kubenswrapper[4878]: I1204 16:49:14.662247 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-content/0.log" Dec 04 16:49:14 crc kubenswrapper[4878]: I1204 16:49:14.858887 4878 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-utilities/0.log" Dec 04 16:49:14 crc kubenswrapper[4878]: I1204 16:49:14.971527 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/extract-content/0.log" Dec 04 16:49:15 crc kubenswrapper[4878]: I1204 16:49:15.089749 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-47cwt_0b883c34-95fa-4a50-912e-513bf11d581d/registry-server/0.log" Dec 04 16:49:15 crc kubenswrapper[4878]: I1204 16:49:15.100632 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8mmv_805ee025-4add-4f95-a2f7-64c73eccd9fa/registry-server/0.log" Dec 04 16:49:15 crc kubenswrapper[4878]: I1204 16:49:15.120478 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-utilities/0.log" Dec 04 16:49:15 crc kubenswrapper[4878]: I1204 16:49:15.318020 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-utilities/0.log" Dec 04 16:49:15 crc kubenswrapper[4878]: I1204 16:49:15.368234 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-content/0.log" Dec 04 16:49:15 crc kubenswrapper[4878]: I1204 16:49:15.368416 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-content/0.log" Dec 04 16:49:15 crc kubenswrapper[4878]: I1204 16:49:15.548893 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-content/0.log" Dec 
04 16:49:15 crc kubenswrapper[4878]: I1204 16:49:15.608487 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/extract-utilities/0.log" Dec 04 16:49:16 crc kubenswrapper[4878]: I1204 16:49:16.057138 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-blfdc_8e4c8769-b454-4abd-b16c-42443ff77475/registry-server/0.log" Dec 04 16:51:00 crc kubenswrapper[4878]: I1204 16:51:00.840984 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 04 16:51:00 crc kubenswrapper[4878]: I1204 16:51:00.841481 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 04 16:51:01 crc kubenswrapper[4878]: I1204 16:51:01.091336 4878 generic.go:334] "Generic (PLEG): container finished" podID="ae09604c-742e-4347-a699-9933f5dc02d1" containerID="06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972" exitCode=0 Dec 04 16:51:01 crc kubenswrapper[4878]: I1204 16:51:01.091389 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c5z47/must-gather-s524q" event={"ID":"ae09604c-742e-4347-a699-9933f5dc02d1","Type":"ContainerDied","Data":"06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972"} Dec 04 16:51:01 crc kubenswrapper[4878]: I1204 16:51:01.092191 4878 scope.go:117] "RemoveContainer" containerID="06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972" Dec 04 16:51:01 crc 
kubenswrapper[4878]: I1204 16:51:01.251495 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c5z47_must-gather-s524q_ae09604c-742e-4347-a699-9933f5dc02d1/gather/0.log" Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.148701 4878 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c5z47/must-gather-s524q"] Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.149495 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-c5z47/must-gather-s524q" podUID="ae09604c-742e-4347-a699-9933f5dc02d1" containerName="copy" containerID="cri-o://0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5" gracePeriod=2 Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.163253 4878 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c5z47/must-gather-s524q"] Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.591903 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c5z47_must-gather-s524q_ae09604c-742e-4347-a699-9933f5dc02d1/copy/0.log" Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.592470 4878 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c5z47/must-gather-s524q"
Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.659123 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae09604c-742e-4347-a699-9933f5dc02d1-must-gather-output\") pod \"ae09604c-742e-4347-a699-9933f5dc02d1\" (UID: \"ae09604c-742e-4347-a699-9933f5dc02d1\") "
Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.659331 4878 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2m2v\" (UniqueName: \"kubernetes.io/projected/ae09604c-742e-4347-a699-9933f5dc02d1-kube-api-access-k2m2v\") pod \"ae09604c-742e-4347-a699-9933f5dc02d1\" (UID: \"ae09604c-742e-4347-a699-9933f5dc02d1\") "
Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.666245 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae09604c-742e-4347-a699-9933f5dc02d1-kube-api-access-k2m2v" (OuterVolumeSpecName: "kube-api-access-k2m2v") pod "ae09604c-742e-4347-a699-9933f5dc02d1" (UID: "ae09604c-742e-4347-a699-9933f5dc02d1"). InnerVolumeSpecName "kube-api-access-k2m2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.762139 4878 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2m2v\" (UniqueName: \"kubernetes.io/projected/ae09604c-742e-4347-a699-9933f5dc02d1-kube-api-access-k2m2v\") on node \"crc\" DevicePath \"\""
Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.824708 4878 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae09604c-742e-4347-a699-9933f5dc02d1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ae09604c-742e-4347-a699-9933f5dc02d1" (UID: "ae09604c-742e-4347-a699-9933f5dc02d1"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 04 16:51:12 crc kubenswrapper[4878]: I1204 16:51:12.866314 4878 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ae09604c-742e-4347-a699-9933f5dc02d1-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.193009 4878 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae09604c-742e-4347-a699-9933f5dc02d1" path="/var/lib/kubelet/pods/ae09604c-742e-4347-a699-9933f5dc02d1/volumes"
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.202793 4878 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c5z47_must-gather-s524q_ae09604c-742e-4347-a699-9933f5dc02d1/copy/0.log"
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.203419 4878 generic.go:334] "Generic (PLEG): container finished" podID="ae09604c-742e-4347-a699-9933f5dc02d1" containerID="0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5" exitCode=143
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.203493 4878 scope.go:117] "RemoveContainer" containerID="0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5"
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.203523 4878 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c5z47/must-gather-s524q"
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.228445 4878 scope.go:117] "RemoveContainer" containerID="06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972"
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.311482 4878 scope.go:117] "RemoveContainer" containerID="0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5"
Dec 04 16:51:13 crc kubenswrapper[4878]: E1204 16:51:13.315799 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5\": container with ID starting with 0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5 not found: ID does not exist" containerID="0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5"
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.315864 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5"} err="failed to get container status \"0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5\": rpc error: code = NotFound desc = could not find container \"0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5\": container with ID starting with 0a3c319800a83083d1494c83fb030fe9742d4b3c9abc8865c8a60437162784d5 not found: ID does not exist"
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.315914 4878 scope.go:117] "RemoveContainer" containerID="06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972"
Dec 04 16:51:13 crc kubenswrapper[4878]: E1204 16:51:13.316215 4878 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972\": container with ID starting with 06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972 not found: ID does not exist" containerID="06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972"
Dec 04 16:51:13 crc kubenswrapper[4878]: I1204 16:51:13.316260 4878 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972"} err="failed to get container status \"06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972\": rpc error: code = NotFound desc = could not find container \"06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972\": container with ID starting with 06e189ab5cd6f417cb88349b0c9f317d4ef7615b977d48546505c14046371972 not found: ID does not exist"
Dec 04 16:51:30 crc kubenswrapper[4878]: I1204 16:51:30.840108 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 16:51:30 crc kubenswrapper[4878]: I1204 16:51:30.840575 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 16:52:00 crc kubenswrapper[4878]: I1204 16:52:00.840808 4878 patch_prober.go:28] interesting pod/machine-config-daemon-xrwqw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 04 16:52:00 crc kubenswrapper[4878]: I1204 16:52:00.841365 4878 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 04 16:52:00 crc kubenswrapper[4878]: I1204 16:52:00.841421 4878 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw"
Dec 04 16:52:00 crc kubenswrapper[4878]: I1204 16:52:00.842075 4878 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"} pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 04 16:52:00 crc kubenswrapper[4878]: I1204 16:52:00.842132 4878 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerName="machine-config-daemon" containerID="cri-o://73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b" gracePeriod=600
Dec 04 16:52:00 crc kubenswrapper[4878]: E1204 16:52:00.965520 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:52:01 crc kubenswrapper[4878]: I1204 16:52:01.706380 4878 generic.go:334] "Generic (PLEG): container finished" podID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b" exitCode=0
Dec 04 16:52:01 crc kubenswrapper[4878]: I1204 16:52:01.706424 4878 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" event={"ID":"a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92","Type":"ContainerDied","Data":"73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"}
Dec 04 16:52:01 crc kubenswrapper[4878]: I1204 16:52:01.706462 4878 scope.go:117] "RemoveContainer" containerID="1ea8cd03d7d65256f50f5110dcbea35f33456101867e7aac824b5368df48546a"
Dec 04 16:52:01 crc kubenswrapper[4878]: I1204 16:52:01.707624 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:52:01 crc kubenswrapper[4878]: E1204 16:52:01.708005 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:52:13 crc kubenswrapper[4878]: I1204 16:52:13.180572 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:52:13 crc kubenswrapper[4878]: E1204 16:52:13.181500 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:52:26 crc kubenswrapper[4878]: I1204 16:52:26.179809 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:52:26 crc kubenswrapper[4878]: E1204 16:52:26.181780 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:52:34 crc kubenswrapper[4878]: I1204 16:52:34.037413 4878 scope.go:117] "RemoveContainer" containerID="8c72d98444f7991d6804b2fca534472223fa7c2c526ca76829a40cbbd670c027"
Dec 04 16:52:38 crc kubenswrapper[4878]: I1204 16:52:38.180133 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:52:38 crc kubenswrapper[4878]: E1204 16:52:38.180971 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:52:50 crc kubenswrapper[4878]: I1204 16:52:50.179494 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:52:50 crc kubenswrapper[4878]: E1204 16:52:50.180245 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:53:02 crc kubenswrapper[4878]: I1204 16:53:02.179562 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:53:02 crc kubenswrapper[4878]: E1204 16:53:02.180472 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:53:14 crc kubenswrapper[4878]: I1204 16:53:14.179437 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:53:14 crc kubenswrapper[4878]: E1204 16:53:14.180385 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:53:28 crc kubenswrapper[4878]: I1204 16:53:28.180008 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:53:28 crc kubenswrapper[4878]: E1204 16:53:28.180774 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:53:39 crc kubenswrapper[4878]: I1204 16:53:39.180787 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:53:39 crc kubenswrapper[4878]: E1204 16:53:39.181608 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:53:54 crc kubenswrapper[4878]: I1204 16:53:54.179780 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:53:54 crc kubenswrapper[4878]: E1204 16:53:54.180625 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:54:05 crc kubenswrapper[4878]: I1204 16:54:05.180947 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:54:05 crc kubenswrapper[4878]: E1204 16:54:05.182811 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:54:18 crc kubenswrapper[4878]: I1204 16:54:18.179764 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:54:18 crc kubenswrapper[4878]: E1204 16:54:18.180533 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:54:30 crc kubenswrapper[4878]: I1204 16:54:30.180659 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:54:30 crc kubenswrapper[4878]: E1204 16:54:30.181478 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:54:42 crc kubenswrapper[4878]: I1204 16:54:42.180496 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:54:42 crc kubenswrapper[4878]: E1204 16:54:42.181402 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:54:53 crc kubenswrapper[4878]: I1204 16:54:53.180026 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:54:53 crc kubenswrapper[4878]: E1204 16:54:53.180971 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"
Dec 04 16:55:05 crc kubenswrapper[4878]: I1204 16:55:05.179919 4878 scope.go:117] "RemoveContainer" containerID="73dfc3c02ab77e3b1dba8ec2fdd33e622712b17811336410f5756535f5a3d73b"
Dec 04 16:55:05 crc kubenswrapper[4878]: E1204 16:55:05.180786 4878 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-xrwqw_openshift-machine-config-operator(a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92)\"" pod="openshift-machine-config-operator/machine-config-daemon-xrwqw" podUID="a6a2bf0a-1c17-4fc6-af13-ee239dfc6a92"